May 15 12:51:45.895375 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025
May 15 12:51:45.895399 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:51:45.895408 kernel: BIOS-provided physical RAM map:
May 15 12:51:45.895417 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 15 12:51:45.895423 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 15 12:51:45.895429 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 12:51:45.895436 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 15 12:51:45.895442 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 15 12:51:45.895448 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 12:51:45.895454 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 12:51:45.895460 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 12:51:45.895466 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 12:51:45.895474 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 15 12:51:45.895480 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 12:51:45.895487 kernel: NX (Execute Disable) protection: active
May 15 12:51:45.895494 kernel: APIC: Static calls initialized
May 15 12:51:45.895500 kernel: SMBIOS 2.8 present.
May 15 12:51:45.895509 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 15 12:51:45.895515 kernel: DMI: Memory slots populated: 1/1
May 15 12:51:45.895521 kernel: Hypervisor detected: KVM
May 15 12:51:45.895528 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 12:51:45.895534 kernel: kvm-clock: using sched offset of 5897438260 cycles
May 15 12:51:45.895541 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 12:51:45.895548 kernel: tsc: Detected 2000.000 MHz processor
May 15 12:51:45.895554 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 12:51:45.895561 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 12:51:45.895568 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 15 12:51:45.895576 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 12:51:45.895583 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 12:51:45.895590 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 15 12:51:45.895596 kernel: Using GB pages for direct mapping
May 15 12:51:45.895603 kernel: ACPI: Early table checksum verification disabled
May 15 12:51:45.895609 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 15 12:51:45.895616 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:51:45.895622 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:51:45.895629 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:51:45.895637 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 12:51:45.895644 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:51:45.895650 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:51:45.895657 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:51:45.895666 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 12:51:45.895673 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 15 12:51:45.895682 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 15 12:51:45.895689 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 12:51:45.895695 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 15 12:51:45.895703 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 15 12:51:45.895709 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 15 12:51:45.895716 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 15 12:51:45.895723 kernel: No NUMA configuration found
May 15 12:51:45.895729 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 15 12:51:45.895754 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
May 15 12:51:45.895761 kernel: Zone ranges:
May 15 12:51:45.895768 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 12:51:45.895775 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 15 12:51:45.895781 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 15 12:51:45.895788 kernel: Device empty
May 15 12:51:45.895795 kernel: Movable zone start for each node
May 15 12:51:45.895801 kernel: Early memory node ranges
May 15 12:51:45.895808 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 12:51:45.895815 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 15 12:51:45.895824 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 15 12:51:45.895831 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 15 12:51:45.895837 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 12:51:45.895844 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 12:51:45.895851 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 15 12:51:45.895857 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 12:51:45.895864 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 12:51:45.895871 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 12:51:45.895877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 12:51:45.895886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 12:51:45.895893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 12:51:45.895899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 12:51:45.895906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 12:51:45.895912 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 12:51:45.895919 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 12:51:45.895926 kernel: TSC deadline timer available
May 15 12:51:45.895932 kernel: CPU topo: Max. logical packages: 1
May 15 12:51:45.895939 kernel: CPU topo: Max. logical dies: 1
May 15 12:51:45.895948 kernel: CPU topo: Max. dies per package: 1
May 15 12:51:45.895954 kernel: CPU topo: Max. threads per core: 1
May 15 12:51:45.895961 kernel: CPU topo: Num. cores per package: 2
May 15 12:51:45.895967 kernel: CPU topo: Num. threads per package: 2
May 15 12:51:45.895974 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 15 12:51:45.895981 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 12:51:45.895988 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 12:51:45.896006 kernel: kvm-guest: setup PV sched yield
May 15 12:51:45.896013 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 12:51:45.896022 kernel: Booting paravirtualized kernel on KVM
May 15 12:51:45.896029 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 12:51:45.896035 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 12:51:45.896042 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 15 12:51:45.896049 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 15 12:51:45.896055 kernel: pcpu-alloc: [0] 0 1
May 15 12:51:45.896062 kernel: kvm-guest: PV spinlocks enabled
May 15 12:51:45.896068 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 12:51:45.896081 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:51:45.896091 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 12:51:45.896098 kernel: random: crng init done
May 15 12:51:45.896104 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 12:51:45.896111 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 12:51:45.896118 kernel: Fallback order for Node 0: 0
May 15 12:51:45.896125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 15 12:51:45.896131 kernel: Policy zone: Normal
May 15 12:51:45.896138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 12:51:45.896147 kernel: software IO TLB: area num 2.
May 15 12:51:45.896153 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 12:51:45.896160 kernel: ftrace: allocating 40065 entries in 157 pages
May 15 12:51:45.896167 kernel: ftrace: allocated 157 pages with 5 groups
May 15 12:51:45.896174 kernel: Dynamic Preempt: voluntary
May 15 12:51:45.896180 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 12:51:45.896188 kernel: rcu: RCU event tracing is enabled.
May 15 12:51:45.896195 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 12:51:45.896202 kernel: Trampoline variant of Tasks RCU enabled.
May 15 12:51:45.896209 kernel: Rude variant of Tasks RCU enabled.
May 15 12:51:45.896217 kernel: Tracing variant of Tasks RCU enabled.
May 15 12:51:45.896224 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 12:51:45.896231 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 12:51:45.896238 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:51:45.896251 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:51:45.896260 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 12:51:45.896267 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 12:51:45.896274 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 12:51:45.896281 kernel: Console: colour VGA+ 80x25
May 15 12:51:45.896288 kernel: printk: legacy console [tty0] enabled
May 15 12:51:45.896295 kernel: printk: legacy console [ttyS0] enabled
May 15 12:51:45.896304 kernel: ACPI: Core revision 20240827
May 15 12:51:45.896312 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 12:51:45.896319 kernel: APIC: Switch to symmetric I/O mode setup
May 15 12:51:45.896326 kernel: x2apic enabled
May 15 12:51:45.896333 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 12:51:45.896342 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 12:51:45.896349 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 12:51:45.896357 kernel: kvm-guest: setup PV IPIs
May 15 12:51:45.896364 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 12:51:45.896371 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 15 12:51:45.896378 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 15 12:51:45.896385 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 12:51:45.896392 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 12:51:45.896399 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 12:51:45.896408 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 12:51:45.896416 kernel: Spectre V2 : Mitigation: Retpolines
May 15 12:51:45.896423 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 15 12:51:45.896430 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 15 12:51:45.896437 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 12:51:45.896444 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 12:51:45.896451 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 12:51:45.896458 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 12:51:45.896466 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 12:51:45.896475 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 12:51:45.896482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 12:51:45.896489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 12:51:45.896496 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 12:51:45.896503 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 15 12:51:45.896512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 12:51:45.896519 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 15 12:51:45.896526 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 15 12:51:45.896535 kernel: Freeing SMP alternatives memory: 32K
May 15 12:51:45.896542 kernel: pid_max: default: 32768 minimum: 301
May 15 12:51:45.896549 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 15 12:51:45.896556 kernel: landlock: Up and running.
May 15 12:51:45.896562 kernel: SELinux: Initializing.
May 15 12:51:45.896569 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:51:45.896576 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 12:51:45.896583 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 15 12:51:45.896590 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 12:51:45.896599 kernel: ... version: 0
May 15 12:51:45.896606 kernel: ... bit width: 48
May 15 12:51:45.896613 kernel: ... generic registers: 6
May 15 12:51:45.896620 kernel: ... value mask: 0000ffffffffffff
May 15 12:51:45.896627 kernel: ... max period: 00007fffffffffff
May 15 12:51:45.896634 kernel: ... fixed-purpose events: 0
May 15 12:51:45.896640 kernel: ... event mask: 000000000000003f
May 15 12:51:45.896647 kernel: signal: max sigframe size: 3376
May 15 12:51:45.896654 kernel: rcu: Hierarchical SRCU implementation.
May 15 12:51:45.896663 kernel: rcu: Max phase no-delay instances is 400.
May 15 12:51:45.896670 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 15 12:51:45.896677 kernel: smp: Bringing up secondary CPUs ...
May 15 12:51:45.896684 kernel: smpboot: x86: Booting SMP configuration:
May 15 12:51:45.896690 kernel: .... node #0, CPUs: #1
May 15 12:51:45.896697 kernel: smp: Brought up 1 node, 2 CPUs
May 15 12:51:45.896704 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 15 12:51:45.896711 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 227296K reserved, 0K cma-reserved)
May 15 12:51:45.896718 kernel: devtmpfs: initialized
May 15 12:51:45.896727 kernel: x86/mm: Memory block size: 128MB
May 15 12:51:45.896734 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 12:51:45.896758 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 12:51:45.896765 kernel: pinctrl core: initialized pinctrl subsystem
May 15 12:51:45.896772 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 12:51:45.896779 kernel: audit: initializing netlink subsys (disabled)
May 15 12:51:45.896786 kernel: audit: type=2000 audit(1747313503.573:1): state=initialized audit_enabled=0 res=1
May 15 12:51:45.896793 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 12:51:45.896800 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 12:51:45.896809 kernel: cpuidle: using governor menu
May 15 12:51:45.896816 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 12:51:45.896822 kernel: dca service started, version 1.12.1
May 15 12:51:45.896829 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 15 12:51:45.896836 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 12:51:45.896843 kernel: PCI: Using configuration type 1 for base access
May 15 12:51:45.896850 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 12:51:45.896857 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 12:51:45.896864 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 12:51:45.896873 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 12:51:45.896879 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 12:51:45.896886 kernel: ACPI: Added _OSI(Module Device)
May 15 12:51:45.896893 kernel: ACPI: Added _OSI(Processor Device)
May 15 12:51:45.896900 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 12:51:45.896907 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 12:51:45.896913 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 12:51:45.896920 kernel: ACPI: Interpreter enabled
May 15 12:51:45.896927 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 12:51:45.896936 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 12:51:45.896943 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 12:51:45.896950 kernel: PCI: Using E820 reservations for host bridge windows
May 15 12:51:45.896957 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 12:51:45.896963 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 12:51:45.897150 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 12:51:45.897265 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 12:51:45.897373 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 12:51:45.897386 kernel: PCI host bridge to bus 0000:00
May 15 12:51:45.897507 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 12:51:45.897606 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 12:51:45.897702 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 12:51:45.897831 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 15 12:51:45.897930 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 12:51:45.898025 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 15 12:51:45.898126 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 12:51:45.898258 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 15 12:51:45.898384 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 15 12:51:45.898493 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 15 12:51:45.898598 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 15 12:51:45.898702 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 15 12:51:45.898832 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 12:51:45.898954 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 15 12:51:45.899079 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 15 12:51:45.899186 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 15 12:51:45.899292 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 12:51:45.899409 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 15 12:51:45.899514 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 15 12:51:45.899624 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 15 12:51:45.899729 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 12:51:45.899887 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 15 12:51:45.900008 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 15 12:51:45.900114 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 12:51:45.900228 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 15 12:51:45.900338 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 15 12:51:45.900442 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 15 12:51:45.900560 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 15 12:51:45.900666 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 15 12:51:45.900676 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 12:51:45.900683 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 12:51:45.900690 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 12:51:45.900697 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 12:51:45.900707 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 12:51:45.900714 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 12:51:45.900722 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 12:51:45.900728 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 12:51:45.900809 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 12:51:45.900818 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 12:51:45.900825 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 12:51:45.900832 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 12:51:45.900839 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 12:51:45.900850 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 12:51:45.900857 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 12:51:45.900864 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 12:51:45.900871 kernel: iommu: Default domain type: Translated
May 15 12:51:45.900878 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 12:51:45.900885 kernel: PCI: Using ACPI for IRQ routing
May 15 12:51:45.900892 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 12:51:45.900899 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 15 12:51:45.900906 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 15 12:51:45.901027 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 12:51:45.901133 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 12:51:45.901288 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 12:51:45.901299 kernel: vgaarb: loaded
May 15 12:51:45.901306 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 12:51:45.901314 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 12:51:45.901321 kernel: clocksource: Switched to clocksource kvm-clock
May 15 12:51:45.901328 kernel: VFS: Disk quotas dquot_6.6.0
May 15 12:51:45.901339 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 12:51:45.901346 kernel: pnp: PnP ACPI init
May 15 12:51:45.901465 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 12:51:45.901476 kernel: pnp: PnP ACPI: found 5 devices
May 15 12:51:45.901484 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 12:51:45.901491 kernel: NET: Registered PF_INET protocol family
May 15 12:51:45.901498 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 12:51:45.901505 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 12:51:45.901515 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 12:51:45.901523 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 12:51:45.901530 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 12:51:45.901537 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 12:51:45.901544 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:51:45.901551 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 12:51:45.901558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 12:51:45.901565 kernel: NET: Registered PF_XDP protocol family
May 15 12:51:45.901662 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 12:51:45.901781 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 12:51:45.901880 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 12:51:45.901975 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 15 12:51:45.902070 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 12:51:45.902165 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 15 12:51:45.902174 kernel: PCI: CLS 0 bytes, default 64
May 15 12:51:45.902181 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 15 12:51:45.902188 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 15 12:51:45.902199 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 15 12:51:45.902207 kernel: Initialise system trusted keyrings
May 15 12:51:45.902214 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 12:51:45.902221 kernel: Key type asymmetric registered
May 15 12:51:45.902228 kernel: Asymmetric key parser 'x509' registered
May 15 12:51:45.902235 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 12:51:45.902242 kernel: io scheduler mq-deadline registered
May 15 12:51:45.902249 kernel: io scheduler kyber registered
May 15 12:51:45.902256 kernel: io scheduler bfq registered
May 15 12:51:45.902265 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 12:51:45.902273 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 12:51:45.902280 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 12:51:45.902287 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 12:51:45.902295 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 12:51:45.902302 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 12:51:45.902309 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 12:51:45.902316 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 12:51:45.902324 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 12:51:45.902436 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 12:51:45.902538 kernel: rtc_cmos 00:03: registered as rtc0
May 15 12:51:45.902637 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T12:51:45 UTC (1747313505)
May 15 12:51:45.902761 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 12:51:45.902772 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 12:51:45.902780 kernel: NET: Registered PF_INET6 protocol family
May 15 12:51:45.902787 kernel: Segment Routing with IPv6
May 15 12:51:45.902794 kernel: In-situ OAM (IOAM) with IPv6
May 15 12:51:45.902804 kernel: NET: Registered PF_PACKET protocol family
May 15 12:51:45.902812 kernel: Key type dns_resolver registered
May 15 12:51:45.902819 kernel: IPI shorthand broadcast: enabled
May 15 12:51:45.902826 kernel: sched_clock: Marking stable (2700002460, 213925250)->(2947558970, -33631260)
May 15 12:51:45.902833 kernel: registered taskstats version 1
May 15 12:51:45.902840 kernel: Loading compiled-in X.509 certificates
May 15 12:51:45.902848 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6'
May 15 12:51:45.902854 kernel: Demotion targets for Node 0: null
May 15 12:51:45.902862 kernel: Key type .fscrypt registered
May 15 12:51:45.902870 kernel: Key type fscrypt-provisioning registered
May 15 12:51:45.902877 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 12:51:45.902884 kernel: ima: Allocated hash algorithm: sha1
May 15 12:51:45.902891 kernel: ima: No architecture policies found
May 15 12:51:45.902898 kernel: clk: Disabling unused clocks
May 15 12:51:45.902906 kernel: Warning: unable to open an initial console.
May 15 12:51:45.902914 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 15 12:51:45.902921 kernel: Write protecting the kernel read-only data: 24576k
May 15 12:51:45.902928 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K
May 15 12:51:45.902937 kernel: Run /init as init process
May 15 12:51:45.902945 kernel: with arguments:
May 15 12:51:45.902952 kernel: /init
May 15 12:51:45.902959 kernel: with environment:
May 15 12:51:45.902966 kernel: HOME=/
May 15 12:51:45.902986 kernel: TERM=linux
May 15 12:51:45.902996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 12:51:45.903004 systemd[1]: Successfully made /usr/ read-only.
May 15 12:51:45.903017 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 12:51:45.903026 systemd[1]: Detected virtualization kvm.
May 15 12:51:45.903034 systemd[1]: Detected architecture x86-64.
May 15 12:51:45.903041 systemd[1]: Running in initrd.
May 15 12:51:45.903049 systemd[1]: No hostname configured, using default hostname.
May 15 12:51:45.903057 systemd[1]: Hostname set to <localhost>.
May 15 12:51:45.903065 systemd[1]: Initializing machine ID from random generator.
May 15 12:51:45.903073 systemd[1]: Queued start job for default target initrd.target.
May 15 12:51:45.903083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 12:51:45.903091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 12:51:45.903101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 12:51:45.903109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 12:51:45.903117 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 12:51:45.903126 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 12:51:45.903135 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 12:51:45.903145 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 12:51:45.903153 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 12:51:45.903161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 12:51:45.903169 systemd[1]: Reached target paths.target - Path Units.
May 15 12:51:45.903177 systemd[1]: Reached target slices.target - Slice Units.
May 15 12:51:45.903185 systemd[1]: Reached target swap.target - Swaps.
May 15 12:51:45.903193 systemd[1]: Reached target timers.target - Timer Units.
May 15 12:51:45.903201 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 12:51:45.903211 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 12:51:45.903219 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 12:51:45.903227 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 12:51:45.903234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 12:51:45.903242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 12:51:45.903251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 12:51:45.903261 systemd[1]: Reached target sockets.target - Socket Units.
May 15 12:51:45.903269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 12:51:45.903277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 12:51:45.903285 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 12:51:45.903294 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 15 12:51:45.903302 systemd[1]: Starting systemd-fsck-usr.service...
May 15 12:51:45.903310 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 12:51:45.903318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 12:51:45.903328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:51:45.903336 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 12:51:45.903345 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 12:51:45.903353 systemd[1]: Finished systemd-fsck-usr.service.
May 15 12:51:45.903383 systemd-journald[206]: Collecting audit messages is disabled.
May 15 12:51:45.903402 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 12:51:45.903411 systemd-journald[206]: Journal started
May 15 12:51:45.903432 systemd-journald[206]: Runtime Journal (/run/log/journal/499b83200cd747c9b94773860c0cd5ac) is 8M, max 78.5M, 70.5M free.
May 15 12:51:45.879671 systemd-modules-load[208]: Inserted module 'overlay'
May 15 12:51:45.905841 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 12:51:45.917855 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 12:51:45.996596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 12:51:45.996628 kernel: Bridge firewalling registered
May 15 12:51:45.940642 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 12:51:45.949181 systemd-modules-load[208]: Inserted module 'br_netfilter'
May 15 12:51:45.958920 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 15 12:51:45.997165 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 12:51:45.998234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:51:45.999478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 12:51:46.003898 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 12:51:46.006849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 12:51:46.016587 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 12:51:46.023504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 12:51:46.028716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 12:51:46.033788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 12:51:46.043893 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 12:51:46.045260 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 12:51:46.068167 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c
May 15 12:51:46.076580 systemd-resolved[236]: Positive Trust Anchors:
May 15 12:51:46.077259 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 12:51:46.077287 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 12:51:46.082439 systemd-resolved[236]: Defaulting to hostname 'linux'.
May 15 12:51:46.083429 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 12:51:46.084253 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 12:51:46.159805 kernel: SCSI subsystem initialized
May 15 12:51:46.168761 kernel: Loading iSCSI transport class v2.0-870.
May 15 12:51:46.179769 kernel: iscsi: registered transport (tcp)
May 15 12:51:46.200112 kernel: iscsi: registered transport (qla4xxx)
May 15 12:51:46.200190 kernel: QLogic iSCSI HBA Driver
May 15 12:51:46.219591 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 12:51:46.233893 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 12:51:46.236668 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 12:51:46.288223 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 12:51:46.290475 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 12:51:46.341774 kernel: raid6: avx2x4 gen() 34857 MB/s
May 15 12:51:46.359769 kernel: raid6: avx2x2 gen() 32566 MB/s
May 15 12:51:46.378184 kernel: raid6: avx2x1 gen() 23361 MB/s
May 15 12:51:46.378217 kernel: raid6: using algorithm avx2x4 gen() 34857 MB/s
May 15 12:51:46.397164 kernel: raid6: .... xor() 4897 MB/s, rmw enabled
May 15 12:51:46.397255 kernel: raid6: using avx2x2 recovery algorithm
May 15 12:51:46.416774 kernel: xor: automatically using best checksumming function avx
May 15 12:51:46.547792 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 12:51:46.555109 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 12:51:46.557353 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 12:51:46.578281 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 15 12:51:46.583239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 12:51:46.585821 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 12:51:46.610292 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
May 15 12:51:46.638014 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 12:51:46.639838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 12:51:46.695458 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 12:51:46.699894 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 12:51:46.765761 kernel: libata version 3.00 loaded.
May 15 12:51:46.767763 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 15 12:51:46.932254 kernel: cryptd: max_cpu_qlen set to 1000
May 15 12:51:46.932278 kernel: ahci 0000:00:1f.2: version 3.0
May 15 12:51:46.992791 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 12:51:46.992816 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 15 12:51:46.992966 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 15 12:51:46.993092 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 12:51:46.993217 kernel: scsi host1: ahci
May 15 12:51:46.993357 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 15 12:51:46.993369 kernel: scsi host0: Virtio SCSI HBA
May 15 12:51:46.993489 kernel: AES CTR mode by8 optimization enabled
May 15 12:51:46.993504 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 15 12:51:47.017489 kernel: scsi host2: ahci
May 15 12:51:47.017667 kernel: scsi host3: ahci
May 15 12:51:47.017878 kernel: scsi host4: ahci
May 15 12:51:47.018015 kernel: scsi host5: ahci
May 15 12:51:47.018145 kernel: scsi host6: ahci
May 15 12:51:47.018285 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 0
May 15 12:51:47.018297 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 0
May 15 12:51:47.018308 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 0
May 15 12:51:47.018318 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 15 12:51:47.018458 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 0
May 15 12:51:47.018469 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 15 12:51:47.018598 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 0
May 15 12:51:47.018609 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 15 12:51:47.018757 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 0
May 15 12:51:47.018771 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 15 12:51:47.018908 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 15 12:51:47.019064 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 12:51:47.019075 kernel: GPT:9289727 != 167739391
May 15 12:51:47.019084 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 12:51:47.019094 kernel: GPT:9289727 != 167739391
May 15 12:51:47.019103 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 12:51:47.019117 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:51:47.019126 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 15 12:51:46.966852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 12:51:46.966981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:51:46.968721 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:51:46.971824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 12:51:46.974706 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 12:51:47.064472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 12:51:47.305550 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 15 12:51:47.305618 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 12:51:47.305631 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 12:51:47.305641 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 12:51:47.312760 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 12:51:47.312789 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 12:51:47.374136 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 15 12:51:47.383668 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 15 12:51:47.384548 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 12:51:47.392634 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 15 12:51:47.393260 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 15 12:51:47.403003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 12:51:47.405183 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 12:51:47.405837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 12:51:47.407120 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 12:51:47.409175 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 12:51:47.411909 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 12:51:47.426480 disk-uuid[630]: Primary Header is updated.
May 15 12:51:47.426480 disk-uuid[630]: Secondary Entries is updated.
May 15 12:51:47.426480 disk-uuid[630]: Secondary Header is updated.
May 15 12:51:47.430083 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 12:51:47.434775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:51:47.453759 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:51:48.448502 disk-uuid[633]: The operation has completed successfully.
May 15 12:51:48.449536 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 12:51:48.500516 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 12:51:48.500644 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 12:51:48.523906 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 12:51:48.539888 sh[652]: Success
May 15 12:51:48.557068 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 12:51:48.557131 kernel: device-mapper: uevent: version 1.0.3
May 15 12:51:48.560103 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 15 12:51:48.569920 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 15 12:51:48.614252 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 12:51:48.616818 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 12:51:48.632557 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 12:51:48.643356 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 15 12:51:48.643391 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (664)
May 15 12:51:48.650077 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004
May 15 12:51:48.650108 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 12:51:48.650157 kernel: BTRFS info (device dm-0): using free-space-tree
May 15 12:51:48.658100 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 12:51:48.659061 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 15 12:51:48.659915 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 12:51:48.660615 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 12:51:48.663196 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 12:51:48.686761 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (697)
May 15 12:51:48.690908 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:51:48.690945 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 12:51:48.692640 kernel: BTRFS info (device sda6): using free-space-tree
May 15 12:51:48.701763 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:51:48.703051 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 12:51:48.706942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 12:51:48.777014 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 12:51:48.783561 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 12:51:48.809977 ignition[763]: Ignition 2.21.0
May 15 12:51:48.810676 ignition[763]: Stage: fetch-offline
May 15 12:51:48.810706 ignition[763]: no configs at "/usr/lib/ignition/base.d"
May 15 12:51:48.810715 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:51:48.810805 ignition[763]: parsed url from cmdline: ""
May 15 12:51:48.813420 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 12:51:48.810809 ignition[763]: no config URL provided
May 15 12:51:48.810813 ignition[763]: reading system config file "/usr/lib/ignition/user.ign"
May 15 12:51:48.810821 ignition[763]: no config at "/usr/lib/ignition/user.ign"
May 15 12:51:48.810825 ignition[763]: failed to fetch config: resource requires networking
May 15 12:51:48.810959 ignition[763]: Ignition finished successfully
May 15 12:51:48.820280 systemd-networkd[838]: lo: Link UP
May 15 12:51:48.820293 systemd-networkd[838]: lo: Gained carrier
May 15 12:51:48.821678 systemd-networkd[838]: Enumeration completed
May 15 12:51:48.821765 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 12:51:48.822398 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:51:48.822402 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 12:51:48.823250 systemd[1]: Reached target network.target - Network.
May 15 12:51:48.824579 systemd-networkd[838]: eth0: Link UP
May 15 12:51:48.824583 systemd-networkd[838]: eth0: Gained carrier
May 15 12:51:48.824591 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 12:51:48.826291 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 12:51:48.848365 ignition[842]: Ignition 2.21.0
May 15 12:51:48.848377 ignition[842]: Stage: fetch
May 15 12:51:48.848483 ignition[842]: no configs at "/usr/lib/ignition/base.d"
May 15 12:51:48.848493 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:51:48.848556 ignition[842]: parsed url from cmdline: ""
May 15 12:51:48.848559 ignition[842]: no config URL provided
May 15 12:51:48.848564 ignition[842]: reading system config file "/usr/lib/ignition/user.ign"
May 15 12:51:48.848572 ignition[842]: no config at "/usr/lib/ignition/user.ign"
May 15 12:51:48.848599 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #1
May 15 12:51:48.848726 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 12:51:49.049645 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #2
May 15 12:51:49.049829 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 12:51:49.302820 systemd-networkd[838]: eth0: DHCPv4 address 172.236.126.108/24, gateway 172.236.126.1 acquired from 23.215.118.129
May 15 12:51:49.449980 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #3
May 15 12:51:49.542971 ignition[842]: PUT result: OK
May 15 12:51:49.543026 ignition[842]: GET http://169.254.169.254/v1/user-data: attempt #1
May 15 12:51:49.656638 ignition[842]: GET result: OK
May 15 12:51:49.657068 ignition[842]: parsing config with SHA512: 48f02057398a428cbc847253b66179a0982af99d4e08c7962cf3fa4d385afc35f2fea8e33cb883c864150cfcb15ba6bbdd480d8614c00ae5f5454db1745ecc1c
May 15 12:51:49.661901 unknown[842]: fetched base config from "system"
May 15 12:51:49.661912 unknown[842]: fetched base config from "system"
May 15 12:51:49.662135 ignition[842]: fetch: fetch complete
May 15 12:51:49.661917 unknown[842]: fetched user config from "akamai"
May 15 12:51:49.662140 ignition[842]: fetch: fetch passed
May 15 12:51:49.664947 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 12:51:49.662177 ignition[842]: Ignition finished successfully
May 15 12:51:49.668850 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 12:51:49.711081 ignition[849]: Ignition 2.21.0
May 15 12:51:49.711093 ignition[849]: Stage: kargs
May 15 12:51:49.711210 ignition[849]: no configs at "/usr/lib/ignition/base.d"
May 15 12:51:49.711219 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:51:49.713619 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 12:51:49.711941 ignition[849]: kargs: kargs passed
May 15 12:51:49.711982 ignition[849]: Ignition finished successfully
May 15 12:51:49.715946 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 12:51:49.755057 ignition[855]: Ignition 2.21.0
May 15 12:51:49.755069 ignition[855]: Stage: disks
May 15 12:51:49.755200 ignition[855]: no configs at "/usr/lib/ignition/base.d"
May 15 12:51:49.755210 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 12:51:49.756755 ignition[855]: disks: disks passed
May 15 12:51:49.758349 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 12:51:49.756825 ignition[855]: Ignition finished successfully
May 15 12:51:49.759756 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 12:51:49.760339 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 12:51:49.761344 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 12:51:49.762517 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 12:51:49.763766 systemd[1]: Reached target basic.target - Basic System.
May 15 12:51:49.765693 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 12:51:49.796237 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 15 12:51:49.800356 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 12:51:49.802646 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 12:51:49.910758 kernel: EXT4-fs (sda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none.
May 15 12:51:49.911239 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 12:51:49.912310 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 12:51:49.914063 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 12:51:49.916816 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 12:51:49.917794 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 12:51:49.917835 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 12:51:49.917859 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 12:51:49.924121 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 12:51:49.926517 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 12:51:49.935076 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (871)
May 15 12:51:49.935116 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5
May 15 12:51:49.937238 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 12:51:49.939107 kernel: BTRFS info (device sda6): using free-space-tree
May 15 12:51:49.945991 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 12:51:49.982764 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory
May 15 12:51:49.987673 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory
May 15 12:51:49.991694 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory
May 15 12:51:49.996397 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 12:51:50.077637 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 12:51:50.079491 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 12:51:50.081199 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 12:51:50.095601 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 12:51:50.098746 kernel: BTRFS info (device sda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:51:50.113595 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 12:51:50.122494 ignition[988]: INFO : Ignition 2.21.0 May 15 12:51:50.122494 ignition[988]: INFO : Stage: mount May 15 12:51:50.124812 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:51:50.124812 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:50.124812 ignition[988]: INFO : mount: mount passed May 15 12:51:50.124812 ignition[988]: INFO : Ignition finished successfully May 15 12:51:50.126263 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 12:51:50.128681 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 12:51:50.573903 systemd-networkd[838]: eth0: Gained IPv6LL May 15 12:51:50.912863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 12:51:50.939128 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (999) May 15 12:51:50.939175 kernel: BTRFS info (device sda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 12:51:50.942654 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 15 12:51:50.942679 kernel: BTRFS info (device sda6): using free-space-tree May 15 12:51:50.947956 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 12:51:50.971618 ignition[1016]: INFO : Ignition 2.21.0 May 15 12:51:50.971618 ignition[1016]: INFO : Stage: files May 15 12:51:50.973036 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:51:50.973036 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:50.973036 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping May 15 12:51:50.975206 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 12:51:50.975206 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 12:51:50.977132 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 12:51:50.977981 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 12:51:50.978849 unknown[1016]: wrote ssh authorized keys file for user: core May 15 12:51:50.979591 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 12:51:50.980485 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 12:51:50.981499 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 15 12:51:51.272409 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 12:51:51.572591 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 12:51:51.572591 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 12:51:51.575134 ignition[1016]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 12:51:51.575134 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 12:51:51.575134 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 12:51:51.575134 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:51:51.575134 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 12:51:51.575134 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:51:51.575134 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 12:51:51.581027 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:51:51.581027 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 12:51:51.581027 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:51:51.581027 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:51:51.581027 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:51:51.581027 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 15 12:51:51.898953 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 12:51:53.066767 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 15 12:51:53.066767 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 12:51:53.069431 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:51:53.069431 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 12:51:53.069431 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 12:51:53.069431 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 12:51:53.073893 ignition[1016]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 15 12:51:53.073893 ignition[1016]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 15 12:51:53.073893 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 12:51:53.073893 ignition[1016]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 15 12:51:53.073893 ignition[1016]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 15 12:51:53.073893 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 12:51:53.073893 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 12:51:53.073893 ignition[1016]: INFO : files: files passed May 15 12:51:53.073893 ignition[1016]: INFO : Ignition finished successfully May 15 12:51:53.072048 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 12:51:53.074666 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 12:51:53.078896 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 12:51:53.092674 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 12:51:53.093955 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 12:51:53.097389 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:51:53.097389 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 12:51:53.099847 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 12:51:53.101292 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:51:53.103364 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 12:51:53.104734 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 12:51:53.147152 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 12:51:53.147279 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 12:51:53.148878 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 12:51:53.150026 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 12:51:53.151200 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 12:51:53.152049 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 12:51:53.182754 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:51:53.184620 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 12:51:53.208061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 12:51:53.209703 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:51:53.210356 systemd[1]: Stopped target timers.target - Timer Units. May 15 12:51:53.212350 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 12:51:53.212624 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 12:51:53.214116 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
May 15 12:51:53.214989 systemd[1]: Stopped target basic.target - Basic System. May 15 12:51:53.216192 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 12:51:53.217247 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 12:51:53.218514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 12:51:53.219994 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 12:51:53.221320 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 12:51:53.222528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 12:51:53.223866 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 12:51:53.225087 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 12:51:53.226563 systemd[1]: Stopped target swap.target - Swaps. May 15 12:51:53.227566 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 12:51:53.227731 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 12:51:53.229498 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 12:51:53.230412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:51:53.231583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 12:51:53.231904 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:51:53.232958 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 12:51:53.233064 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 12:51:53.234673 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 12:51:53.234860 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 12:51:53.236436 systemd[1]: ignition-files.service: Deactivated successfully. May 15 12:51:53.236569 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 12:51:53.238819 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 12:51:53.241822 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 12:51:53.241942 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:51:53.249890 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 12:51:53.250405 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 12:51:53.250513 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:51:53.254830 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 12:51:53.255514 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 12:51:53.262306 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 12:51:53.266790 ignition[1070]: INFO : Ignition 2.21.0 May 15 12:51:53.266790 ignition[1070]: INFO : Stage: umount May 15 12:51:53.266790 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 12:51:53.266790 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 15 12:51:53.266150 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 15 12:51:53.296882 ignition[1070]: INFO : umount: umount passed May 15 12:51:53.296882 ignition[1070]: INFO : Ignition finished successfully May 15 12:51:53.275179 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 12:51:53.275329 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 12:51:53.298651 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 12:51:53.299358 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 12:51:53.299472 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 12:51:53.301534 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 12:51:53.301594 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 12:51:53.302191 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 12:51:53.302238 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 12:51:53.303405 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 12:51:53.303447 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 12:51:53.304522 systemd[1]: Stopped target network.target - Network. May 15 12:51:53.305662 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 12:51:53.305709 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 12:51:53.306723 systemd[1]: Stopped target paths.target - Path Units. May 15 12:51:53.307676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 12:51:53.310783 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:51:53.311831 systemd[1]: Stopped target slices.target - Slice Units. May 15 12:51:53.312961 systemd[1]: Stopped target sockets.target - Socket Units. May 15 12:51:53.314218 systemd[1]: iscsid.socket: Deactivated successfully. May 15 12:51:53.314454 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 12:51:53.315375 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 12:51:53.315413 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 12:51:53.316535 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 12:51:53.316594 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 12:51:53.317646 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 12:51:53.317691 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 12:51:53.318678 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 12:51:53.318727 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 12:51:53.320103 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 12:51:53.321155 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 12:51:53.328241 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 12:51:53.328375 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 12:51:53.331822 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 12:51:53.332048 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 12:51:53.332176 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 12:51:53.334096 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 12:51:53.334598 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
May 15 12:51:53.335882 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 12:51:53.335923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 12:51:53.338040 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 12:51:53.339390 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 12:51:53.339452 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 12:51:53.343671 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 12:51:53.343717 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 12:51:53.344473 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 12:51:53.344518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 12:51:53.345854 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 12:51:53.345905 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:51:53.347178 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:51:53.348884 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 12:51:53.348946 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 12:51:53.364393 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 12:51:53.364517 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 12:51:53.365945 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 12:51:53.366105 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:51:53.367555 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 12:51:53.367611 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 12:51:53.369000 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 12:51:53.369037 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:51:53.370527 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 12:51:53.370578 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 12:51:53.372443 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 12:51:53.372490 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 12:51:53.373587 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 12:51:53.373633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 12:51:53.376847 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 12:51:53.377579 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 12:51:53.377628 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:51:53.379833 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 12:51:53.379885 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:51:53.381482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 12:51:53.381528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 12:51:53.384938 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 15 12:51:53.384992 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 12:51:53.385037 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 12:51:53.393974 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 12:51:53.394107 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 12:51:53.396165 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 12:51:53.397901 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 12:51:53.415415 systemd[1]: Switching root. May 15 12:51:53.450037 systemd-journald[206]: Journal stopped May 15 12:51:54.504150 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). May 15 12:51:54.504176 kernel: SELinux: policy capability network_peer_controls=1 May 15 12:51:54.504188 kernel: SELinux: policy capability open_perms=1 May 15 12:51:54.504200 kernel: SELinux: policy capability extended_socket_class=1 May 15 12:51:54.504208 kernel: SELinux: policy capability always_check_network=0 May 15 12:51:54.504217 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 12:51:54.504226 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 12:51:54.504235 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 12:51:54.504244 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 12:51:54.504253 kernel: SELinux: policy capability userspace_initial_context=0 May 15 12:51:54.504264 kernel: audit: type=1403 audit(1747313513.579:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 12:51:54.504273 systemd[1]: Successfully loaded SELinux policy in 54.747ms. May 15 12:51:54.504284 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.140ms. May 15 12:51:54.504295 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 12:51:54.504305 systemd[1]: Detected virtualization kvm. May 15 12:51:54.504317 systemd[1]: Detected architecture x86-64. May 15 12:51:54.504327 systemd[1]: Detected first boot. May 15 12:51:54.504337 systemd[1]: Initializing machine ID from random generator. May 15 12:51:54.504346 zram_generator::config[1120]: No configuration found. May 15 12:51:54.504356 kernel: Guest personality initialized and is inactive May 15 12:51:54.504365 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 12:51:54.504375 kernel: Initialized host personality May 15 12:51:54.504386 kernel: NET: Registered PF_VSOCK protocol family May 15 12:51:54.504396 systemd[1]: Populated /etc with preset unit settings. May 15 12:51:54.504406 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 12:51:54.504416 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 12:51:54.504426 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 12:51:54.504436 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
May 15 12:51:54.504446 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 12:51:54.504458 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 12:51:54.504468 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 12:51:54.504478 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 12:51:54.504488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 12:51:54.504498 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 12:51:54.504508 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 12:51:54.504518 systemd[1]: Created slice user.slice - User and Session Slice. May 15 12:51:54.504530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 12:51:54.504540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 12:51:54.504550 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 12:51:54.504560 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 12:51:54.504572 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 12:51:54.504583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 12:51:54.504593 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 12:51:54.504604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 12:51:54.504616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 12:51:54.504626 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 12:51:54.504636 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 12:51:54.504646 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 12:51:54.504657 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 12:51:54.504667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 12:51:54.504677 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 12:51:54.504687 systemd[1]: Reached target slices.target - Slice Units. May 15 12:51:54.504699 systemd[1]: Reached target swap.target - Swaps. May 15 12:51:54.504709 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 12:51:54.504719 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 12:51:54.504729 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 12:51:54.504759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 12:51:54.504772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 12:51:54.504783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 12:51:54.504793 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 12:51:54.504803 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 12:51:54.504813 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
May 15 12:51:54.504823 systemd[1]: Mounting media.mount - External Media Directory... May 15 12:51:54.504838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:54.504855 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 12:51:54.504872 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 12:51:54.504884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 12:51:54.504899 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 12:51:54.504917 systemd[1]: Reached target machines.target - Containers. May 15 12:51:54.504934 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 12:51:54.504950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:54.504965 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 12:51:54.504980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 12:51:54.505001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:51:54.505018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:51:54.505034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:51:54.505051 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 12:51:54.505067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:51:54.505084 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 12:51:54.505100 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 12:51:54.505117 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 12:51:54.505132 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 12:51:54.505145 systemd[1]: Stopped systemd-fsck-usr.service. May 15 12:51:54.505156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:54.505166 kernel: fuse: init (API version 7.41) May 15 12:51:54.505177 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 12:51:54.505187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 12:51:54.505197 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 12:51:54.505207 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 12:51:54.505217 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 12:51:54.505230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 12:51:54.505244 systemd[1]: verity-setup.service: Deactivated successfully. May 15 12:51:54.505256 kernel: ACPI: bus type drm_connector registered May 15 12:51:54.505266 systemd[1]: Stopped verity-setup.service. 
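
The modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop jobs being started above are all instances of one systemd template unit: the text after "@" becomes the %i instance specifier, which the template hands to modprobe. A simplified sketch of such a template (the unit systemd actually ships adds ordering and condition directives):

    # modprobe@.service (simplified sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # the "-" prefix tolerates a failing exit; %i expands to e.g. "dm_mod"
    ExecStart=-/usr/sbin/modprobe -abq %i
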
May 15 12:51:54.505281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:54.505324 systemd-journald[1203]: Collecting audit messages is disabled. May 15 12:51:54.505361 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 12:51:54.505378 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 12:51:54.505395 systemd-journald[1203]: Journal started May 15 12:51:54.505420 systemd-journald[1203]: Runtime Journal (/run/log/journal/e1983ab9c7cc4066879aa2af5110f355) is 8M, max 78.5M, 70.5M free. May 15 12:51:54.506718 kernel: loop: module loaded May 15 12:51:54.176332 systemd[1]: Queued start job for default target multi-user.target. May 15 12:51:54.197215 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 15 12:51:54.509131 systemd[1]: Started systemd-journald.service - Journal Service. May 15 12:51:54.198032 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 12:51:54.510567 systemd[1]: Mounted media.mount - External Media Directory. May 15 12:51:54.512555 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 12:51:54.513882 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 12:51:54.514502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 12:51:54.515564 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 12:51:54.517150 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 12:51:54.519134 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 12:51:54.519393 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 12:51:54.520233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:51:54.520441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:51:54.521403 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:51:54.522067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:51:54.523110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:51:54.524012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:51:54.526539 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 12:51:54.526775 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 12:51:54.527944 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:51:54.528207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:51:54.529432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 12:51:54.530324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 12:51:54.531245 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 12:51:54.544904 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 12:51:54.548560 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 12:51:54.552474 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 12:51:54.556887 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 15 12:51:54.557806 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 12:51:54.557887 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 12:51:54.560426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 12:51:54.568884 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 12:51:54.570015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:54.572940 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 12:51:54.579138 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 12:51:54.580083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:51:54.583875 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 12:51:54.584524 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:51:54.586206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 12:51:54.592989 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 12:51:54.601175 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 12:51:54.603670 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 12:51:54.607015 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 12:51:54.630327 systemd-journald[1203]: Time spent on flushing to /var/log/journal/e1983ab9c7cc4066879aa2af5110f355 is 38.530ms for 998 entries. May 15 12:51:54.630327 systemd-journald[1203]: System Journal (/var/log/journal/e1983ab9c7cc4066879aa2af5110f355) is 8M, max 195.6M, 187.6M free. May 15 12:51:54.696010 systemd-journald[1203]: Received client request to flush runtime journal. May 15 12:51:54.696906 kernel: loop0: detected capacity change from 0 to 146240 May 15 12:51:54.696941 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 12:51:54.641284 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 12:51:54.643330 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 12:51:54.650925 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 12:51:54.658633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 12:51:54.693426 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 12:51:54.702561 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 12:51:54.716284 kernel: loop1: detected capacity change from 0 to 218376 May 15 12:51:54.724559 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 12:51:54.738067 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 12:51:54.741456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 12:51:54.758767 kernel: loop2: detected capacity change from 0 to 113872 May 15 12:51:54.801132 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. 
May 15 12:51:54.801152 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. May 15 12:51:54.808759 kernel: loop3: detected capacity change from 0 to 8 May 15 12:51:54.812831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 12:51:54.828786 kernel: loop4: detected capacity change from 0 to 146240 May 15 12:51:54.849765 kernel: loop5: detected capacity change from 0 to 218376 May 15 12:51:54.867768 kernel: loop6: detected capacity change from 0 to 113872 May 15 12:51:54.883795 kernel: loop7: detected capacity change from 0 to 8 May 15 12:51:54.888996 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 15 12:51:54.889757 (sd-merge)[1263]: Merged extensions into '/usr'. May 15 12:51:54.896943 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... May 15 12:51:54.897087 systemd[1]: Reloading... May 15 12:51:54.999811 zram_generator::config[1289]: No configuration found. May 15 12:51:55.136107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:51:55.153640 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 12:51:55.210897 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 12:51:55.211428 systemd[1]: Reloading finished in 313 ms. May 15 12:51:55.231702 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 12:51:55.232717 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 12:51:55.244864 systemd[1]: Starting ensure-sysext.service... May 15 12:51:55.248856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 12:51:55.269935 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)... May 15 12:51:55.269955 systemd[1]: Reloading... May 15 12:51:55.285681 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 12:51:55.286293 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 12:51:55.286635 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 12:51:55.287122 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 12:51:55.288638 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 12:51:55.288987 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. May 15 12:51:55.289113 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. May 15 12:51:55.294803 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:51:55.294880 systemd-tmpfiles[1333]: Skipping /boot May 15 12:51:55.311856 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. May 15 12:51:55.311934 systemd-tmpfiles[1333]: Skipping /boot May 15 12:51:55.356764 zram_generator::config[1363]: No configuration found. 
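
The (sd-merge) lines above are systemd-sysext overlaying the four extension images onto /usr; the merge is what makes units absent from the Flatcar base image, such as containerd.service started further below, available at all. The kubernetes image participates because the Ignition files stage linked it into the sysext search path earlier. Illustrative inspection commands:

    # The symlink written by the Ignition files stage activates the image:
    ls -l /etc/extensions/kubernetes.raw
    #   -> /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
    systemd-sysext status    # list which extensions are merged into /usr and /opt
    systemd-sysext refresh   # re-run the merge after adding or removing an image
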
May 15 12:51:55.442554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:51:55.512368 systemd[1]: Reloading finished in 242 ms. May 15 12:51:55.524751 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 12:51:55.542105 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 12:51:55.550057 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:51:55.552952 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 12:51:55.561948 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 12:51:55.566808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 12:51:55.570657 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 12:51:55.578033 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 12:51:55.582782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:55.582953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:55.586592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 12:51:55.594106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 12:51:55.598331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 12:51:55.599550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:55.599650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:55.605357 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 12:51:55.606807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:55.610708 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:55.610879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:55.611024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:55.611096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:55.611166 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:55.620289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 15 12:51:55.620521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 12:51:55.628023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 12:51:55.629909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 12:51:55.630052 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 12:51:55.630206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 12:51:55.631464 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 12:51:55.633681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 12:51:55.638045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 12:51:55.647162 systemd[1]: Finished ensure-sysext.service. May 15 12:51:55.648769 systemd-udevd[1410]: Using default interface naming scheme 'v255'. May 15 12:51:55.660033 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 12:51:55.661966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 12:51:55.662792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 12:51:55.666936 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 12:51:55.667154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 12:51:55.668480 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 12:51:55.668673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 12:51:55.678105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 12:51:55.678221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 12:51:55.681078 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 12:51:55.686962 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 12:51:55.714874 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 12:51:55.715596 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 12:51:55.716264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 12:51:55.720603 augenrules[1445]: No rules May 15 12:51:55.725998 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 12:51:55.727819 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 12:51:55.728811 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:51:55.729161 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:51:55.735916 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 12:51:55.836059 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
May 15 12:51:55.955415 systemd-networkd[1458]: lo: Link UP May 15 12:51:55.955429 systemd-networkd[1458]: lo: Gained carrier May 15 12:51:55.957761 kernel: mousedev: PS/2 mouse device common for all mice May 15 12:51:55.959118 systemd-networkd[1458]: Enumeration completed May 15 12:51:55.959689 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:51:55.959702 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 12:51:55.959825 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 12:51:55.964360 systemd-networkd[1458]: eth0: Link UP May 15 12:51:55.964547 systemd-networkd[1458]: eth0: Gained carrier May 15 12:51:55.964570 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 12:51:55.964878 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 12:51:55.969385 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 12:51:56.005319 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 12:51:56.032084 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 12:51:56.032786 systemd[1]: Reached target time-set.target - System Time Set. May 15 12:51:56.071288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 15 12:51:56.074860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 12:51:56.077782 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 12:51:56.078551 systemd-resolved[1409]: Positive Trust Anchors: May 15 12:51:56.078564 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 12:51:56.078592 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 12:51:56.083754 systemd-resolved[1409]: Defaulting to hostname 'linux'. May 15 12:51:56.085652 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 12:51:56.086361 systemd[1]: Reached target network.target - Network. May 15 12:51:56.086897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 12:51:56.088088 systemd[1]: Reached target sysinit.target - System Initialization. May 15 12:51:56.089210 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 12:51:56.090444 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 12:51:56.091139 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
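
eth0 is matched here by the catch-all /usr/lib/systemd/network/zz-default.network, which is what the "potentially unpredictable interface name" warning refers to; the zz- prefix sorts the file after any more specific .network unit, so it applies only when nothing else claims the interface. A unit of roughly this shape produces the DHCPv4 behavior seen above (a sketch; the file Flatcar ships carries additional options):

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    # Any interface not matched by an earlier-sorted .network file
    Name=*

    [Network]
    DHCP=yes
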
May 15 12:51:56.092959 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 12:51:56.093586 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 12:51:56.094554 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 12:51:56.095717 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 12:51:56.095765 systemd[1]: Reached target paths.target - Path Units. May 15 12:51:56.096568 systemd[1]: Reached target timers.target - Timer Units. May 15 12:51:56.100774 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 12:51:56.101762 kernel: ACPI: button: Power Button [PWRF] May 15 12:51:56.104324 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 12:51:56.108153 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 12:51:56.109356 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 12:51:56.110184 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 12:51:56.114531 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 12:51:56.115981 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 12:51:56.118839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 12:51:56.120622 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 12:51:56.126761 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 12:51:56.130925 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 12:51:56.134915 systemd[1]: Reached target sockets.target - Socket Units. May 15 12:51:56.135602 systemd[1]: Reached target basic.target - Basic System. May 15 12:51:56.136276 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 12:51:56.136312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 12:51:56.139829 systemd[1]: Starting containerd.service - containerd container runtime... May 15 12:51:56.142604 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 12:51:56.145656 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 12:51:56.152036 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 12:51:56.156479 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 12:51:56.161132 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 12:51:56.162265 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 12:51:56.168891 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 12:51:56.179538 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 12:51:56.182899 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 12:51:56.186560 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 12:51:56.199938 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 15 12:51:56.207165 jq[1521]: false May 15 12:51:56.207672 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 12:51:56.210967 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 12:51:56.211473 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 12:51:56.217538 systemd[1]: Starting update-engine.service - Update Engine... May 15 12:51:56.230417 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 12:51:56.234252 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 12:51:56.235868 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 12:51:56.236355 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 12:51:56.239872 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing passwd entry cache May 15 12:51:56.239879 oslogin_cache_refresh[1523]: Refreshing passwd entry cache May 15 12:51:56.243392 update_engine[1532]: I20250515 12:51:56.241722 1532 main.cc:92] Flatcar Update Engine starting May 15 12:51:56.258213 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting users, quitting May 15 12:51:56.258213 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:51:56.258213 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing group entry cache May 15 12:51:56.257957 oslogin_cache_refresh[1523]: Failure getting users, quitting May 15 12:51:56.257975 oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 12:51:56.258020 oslogin_cache_refresh[1523]: Refreshing group entry cache May 15 12:51:56.261235 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting groups, quitting May 15 12:51:56.261235 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:51:56.261224 oslogin_cache_refresh[1523]: Failure getting groups, quitting May 15 12:51:56.261234 oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 12:51:56.269093 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 12:51:56.271924 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 12:51:56.274800 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 12:51:56.275038 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
May 15 12:51:56.291236 extend-filesystems[1522]: Found loop4 May 15 12:51:56.291236 extend-filesystems[1522]: Found loop5 May 15 12:51:56.291236 extend-filesystems[1522]: Found loop6 May 15 12:51:56.291236 extend-filesystems[1522]: Found loop7 May 15 12:51:56.291236 extend-filesystems[1522]: Found sda May 15 12:51:56.291236 extend-filesystems[1522]: Found sda1 May 15 12:51:56.291236 extend-filesystems[1522]: Found sda2 May 15 12:51:56.291236 extend-filesystems[1522]: Found sda3 May 15 12:51:56.291236 extend-filesystems[1522]: Found usr May 15 12:51:56.291236 extend-filesystems[1522]: Found sda4 May 15 12:51:56.291236 extend-filesystems[1522]: Found sda6 May 15 12:51:56.291236 extend-filesystems[1522]: Found sda7 May 15 12:51:56.291236 extend-filesystems[1522]: Found sda9 May 15 12:51:56.291236 extend-filesystems[1522]: Checking size of /dev/sda9 May 15 12:51:56.396825 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 15 12:51:56.396871 update_engine[1532]: I20250515 12:51:56.379916 1532 update_check_scheduler.cc:74] Next update check in 7m38s May 15 12:51:56.340991 dbus-daemon[1518]: [system] SELinux support is enabled May 15 12:51:56.337045 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 12:51:56.397387 extend-filesystems[1522]: Resized partition /dev/sda9 May 15 12:51:56.401126 jq[1533]: true May 15 12:51:56.401226 tar[1537]: linux-amd64/LICENSE May 15 12:51:56.401226 tar[1537]: linux-amd64/helm May 15 12:51:56.341156 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 12:51:56.401493 jq[1558]: true May 15 12:51:56.401686 extend-filesystems[1565]: resize2fs 1.47.2 (1-Jan-2025) May 15 12:51:56.352186 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 12:51:56.352210 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 12:51:56.360205 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 12:51:56.360227 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 12:51:56.375270 systemd[1]: motdgen.service: Deactivated successfully. May 15 12:51:56.375520 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 12:51:56.382893 systemd[1]: Started update-engine.service - Update Engine. May 15 12:51:56.415821 coreos-metadata[1516]: May 15 12:51:56.415 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 12:51:56.417301 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 12:51:56.452226 systemd-networkd[1458]: eth0: DHCPv4 address 172.236.126.108/24, gateway 172.236.126.1 acquired from 23.215.118.129 May 15 12:51:56.455863 dbus-daemon[1518]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1458 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 15 12:51:56.458614 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 15 12:51:56.460544 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection. 
May 15 12:51:56.463611 systemd-logind[1531]: New seat seat0. May 15 12:51:56.544207 bash[1589]: Updated "/home/core/.ssh/authorized_keys" May 15 12:51:56.585942 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 15 12:51:56.596328 extend-filesystems[1565]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 15 12:51:56.596328 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 10 May 15 12:51:56.596328 extend-filesystems[1565]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 15 12:51:56.688519 kernel: EDAC MC: Ver: 3.0.0 May 15 12:51:56.670387 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.612313790Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656265780Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.72µs" May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656286420Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656308370Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656463680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656485490Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656510580Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656569740Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.656579710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.662500310Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 12:51:56.690301 containerd[1555]: time="2025-05-15T12:51:56.662516430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:51:56.690637 extend-filesystems[1522]: Resized filesystem in /dev/sda9 May 15 12:51:56.671211 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
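The resize2fs figures above pin down the growth of /dev/sda9: block counts times the 4 KiB block size (the log reports "(4k) blocks") give the before and after capacity. A small sketch of that arithmetic, using only numbers taken from the log:

```python
# Arithmetic behind the resize messages: 553472 -> 20360187 blocks of
# 4 KiB each, as reported by the EXT4-fs and resize2fs lines above.
OLD_BLOCKS, NEW_BLOCKS, BLOCK_SIZE = 553_472, 20_360_187, 4096

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before resize: {gib(OLD_BLOCKS):6.2f} GiB")  # ~2.11 GiB
print(f"after resize:  {gib(NEW_BLOCKS):6.2f} GiB")  # ~77.67 GiB
```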
May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.662527510Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.662535330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.662630220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.665958530Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.665990710Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.666000500Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.666046820Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.666221340Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.666282220Z" level=info msg="metadata content store policy set" policy=shared May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.674187040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.674218420Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.674231510Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 12:51:56.697672 containerd[1555]: time="2025-05-15T12:51:56.674243250Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 12:51:56.676596 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674254030Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674262770Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674272230Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674282370Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674291930Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674301240Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674311210Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674322120Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674423220Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674440850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674453410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674462590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674471300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 12:51:56.706108 containerd[1555]: time="2025-05-15T12:51:56.674479590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 12:51:56.677286 systemd[1]: Started systemd-logind.service - User Login Management. 
May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674491320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674499700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674509200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674517400Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674526050Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674577890Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674589480Z" level=info msg="Start snapshots syncer" May 15 12:51:56.706399 containerd[1555]: time="2025-05-15T12:51:56.674609680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 12:51:56.681887 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 12:51:56.706563 containerd[1555]: time="2025-05-15T12:51:56.677881130Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 12:51:56.706563 containerd[1555]: time="2025-05-15T12:51:56.677927340Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 12:51:56.696440 systemd[1]: 
Starting sshkeys.service... May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678000540Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678095320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678113720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678122880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678131330Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678140890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678149720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678159860Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678178360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678187520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678196670Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678222980Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678232760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 12:51:56.706768 containerd[1555]: time="2025-05-15T12:51:56.678240120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:51:56.701996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678247820Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678254680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678263790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678272270Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678286830Z" level=info msg="runtime interface created" May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678291560Z" level=info msg="created NRI interface" May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678298080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678307310Z" level=info msg="Connect containerd service" May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.678326200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 12:51:56.711408 containerd[1555]: time="2025-05-15T12:51:56.680254610Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:51:56.702768 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button) May 15 12:51:57.613076 systemd-timesyncd[1440]: Contacted time server 23.131.160.7:123 (0.flatcar.pool.ntp.org). May 15 12:51:57.613130 systemd-timesyncd[1440]: Initial clock synchronization to Thu 2025-05-15 12:51:57.612946 UTC. May 15 12:51:57.615610 systemd-resolved[1409]: Clock change detected. Flushing caches. May 15 12:51:57.624458 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 12:51:57.628679 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 15 12:51:57.760843 coreos-metadata[1605]: May 15 12:51:57.760 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 12:51:57.791632 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 15 12:51:57.794993 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.hostname1' May 15 12:51:57.795588 dbus-daemon[1518]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1575 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 15 12:51:57.802033 systemd[1]: Starting polkit.service - Authorization Manager... 
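The "failed to load cni during init" error above is expected on first boot: /etc/cni/net.d is empty until a network plugin installs its configuration. For orientation, a hypothetical minimal conflist of the shape containerd is looking for; the name, subnet, and plugin choices below are illustrative assumptions, not this node's eventual network config:

```python
# A hypothetical bridge-based CNI conflist; a real CNI plugin
# (flannel, calico, ...) normally drops its own file into
# /etc/cni/net.d once the cluster is bootstrapped.
import json

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {"type": "bridge", "bridge": "cni0", "isGateway": True, "ipMasq": True,
         "ipam": {"type": "host-local",
                  "ranges": [[{"subnet": "10.85.0.0/16"}]]}},
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

print(json.dumps(conflist, indent=2))  # would be saved as e.g.
                                       # /etc/cni/net.d/10-example.conflist
```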
May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.848808653Z" level=info msg="Start subscribing containerd event" May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.848853083Z" level=info msg="Start recovering state" May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849107713Z" level=info msg="Start event monitor" May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849121773Z" level=info msg="Start cni network conf syncer for default" May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849128533Z" level=info msg="Start streaming server" May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849140743Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849147123Z" level=info msg="runtime interface starting up..." May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849152663Z" level=info msg="starting plugins..." May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849164993Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849001703Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849300983Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 12:51:57.849685 containerd[1555]: time="2025-05-15T12:51:57.849349833Z" level=info msg="containerd successfully booted in 0.388059s" May 15 12:51:57.849662 systemd[1]: Started containerd.service - containerd container runtime. May 15 12:51:57.851105 locksmithd[1568]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 12:51:57.890842 coreos-metadata[1605]: May 15 12:51:57.889 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 15 12:51:57.993656 polkitd[1621]: Started polkitd version 126 May 15 12:51:58.000543 polkitd[1621]: Loading rules from directory /etc/polkit-1/rules.d May 15 12:51:58.001961 polkitd[1621]: Loading rules from directory /run/polkit-1/rules.d May 15 12:51:58.002393 polkitd[1621]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:51:58.002671 polkitd[1621]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 15 12:51:58.003599 polkitd[1621]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 15 12:51:58.003637 polkitd[1621]: Loading rules from directory /usr/share/polkit-1/rules.d May 15 12:51:58.005768 polkitd[1621]: Finished loading, compiling and executing 2 rules May 15 12:51:58.008727 systemd[1]: Started polkit.service - Authorization Manager. May 15 12:51:58.009413 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 15 12:51:58.010634 polkitd[1621]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 15 12:51:58.011497 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 12:51:58.048112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 12:51:58.050838 systemd-hostnamed[1575]: Hostname set to <172-236-126-108> (transient) May 15 12:51:58.051237 systemd-resolved[1409]: System hostname changed to '172-236-126-108'. 
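polkitd above ends up compiling two rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d (the missing directories it complains about are harmless). polkit rules are small JavaScript snippets; a hypothetical example of the form, held as a string since polkit's rule language is JavaScript. The action id and the "wheel" group are assumptions for illustration, not taken from this host:

```python
# A hypothetical polkit rules file of the shape polkitd loads above.
EXAMPLE_RULE = """\
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.hostname1.set-hostname" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
"""
print(EXAMPLE_RULE)  # would be installed as e.g.
                     # /etc/polkit-1/rules.d/50-example.rules
```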
May 15 12:51:58.053655 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 12:51:58.056108 coreos-metadata[1605]: May 15 12:51:58.056 INFO Fetch successful May 15 12:51:58.058945 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 12:51:58.079230 update-ssh-keys[1647]: Updated "/home/core/.ssh/authorized_keys" May 15 12:51:58.081391 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 12:51:58.083973 systemd[1]: Finished sshkeys.service. May 15 12:51:58.085425 systemd[1]: issuegen.service: Deactivated successfully. May 15 12:51:58.085832 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 12:51:58.091661 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 12:51:58.108621 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 12:51:58.111901 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 12:51:58.115841 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 12:51:58.116481 systemd[1]: Reached target getty.target - Login Prompts. May 15 12:51:58.258375 tar[1537]: linux-amd64/README.md May 15 12:51:58.276993 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 12:51:58.277370 coreos-metadata[1516]: May 15 12:51:58.277 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 15 12:51:58.367308 coreos-metadata[1516]: May 15 12:51:58.367 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 15 12:51:58.400781 systemd-networkd[1458]: eth0: Gained IPv6LL May 15 12:51:58.403386 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 12:51:58.404489 systemd[1]: Reached target network-online.target - Network is Online. May 15 12:51:58.407432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:51:58.409873 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 12:51:58.434448 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 12:51:58.568682 coreos-metadata[1516]: May 15 12:51:58.568 INFO Fetch successful May 15 12:51:58.568682 coreos-metadata[1516]: May 15 12:51:58.568 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 15 12:51:58.958042 coreos-metadata[1516]: May 15 12:51:58.957 INFO Fetch successful May 15 12:51:59.072831 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 12:51:59.073853 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 12:51:59.261667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:51:59.262668 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 12:51:59.298830 systemd[1]: Startup finished in 2.793s (kernel) + 7.917s (initrd) + 4.921s (userspace) = 15.633s. 
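A side note on the "Startup finished" line: the printed phases sum to 15.631 s, not the reported 15.633 s. Assuming systemd's usual behavior of formatting each phase and the total independently from unrounded monotonic timestamps, a few milliseconds of rounding drift like this is normal; a quick check of the printed figures:

```python
# The phase durations as printed, and their sum; the ~2 ms gap to the
# reported 15.633 s total is per-figure rounding, not an error.
kernel, initrd, userspace = 2.793, 7.917, 4.921
print(f"{kernel + initrd + userspace:.3f}s")  # 15.631s
```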
May 15 12:51:59.304987 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:51:59.810456 kubelet[1699]: E0515 12:51:59.810397 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:51:59.813759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:51:59.813942 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:51:59.814288 systemd[1]: kubelet.service: Consumed 852ms CPU time, 250.7M memory peak. May 15 12:52:00.384719 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 12:52:00.386432 systemd[1]: Started sshd@0-172.236.126.108:22-139.178.89.65:42340.service - OpenSSH per-connection server daemon (139.178.89.65:42340). May 15 12:52:00.734280 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 42340 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:52:00.736008 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:52:00.742344 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 12:52:00.743918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 12:52:00.751134 systemd-logind[1531]: New session 1 of user core. May 15 12:52:00.764608 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 12:52:00.767899 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 12:52:00.779951 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 12:52:00.782344 systemd-logind[1531]: New session c1 of user core. May 15 12:52:00.917242 systemd[1715]: Queued start job for default target default.target. May 15 12:52:00.923886 systemd[1715]: Created slice app.slice - User Application Slice. May 15 12:52:00.923916 systemd[1715]: Reached target paths.target - Paths. May 15 12:52:00.923967 systemd[1715]: Reached target timers.target - Timers. May 15 12:52:00.925382 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 12:52:00.935872 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 12:52:00.935947 systemd[1715]: Reached target sockets.target - Sockets. May 15 12:52:00.935987 systemd[1715]: Reached target basic.target - Basic System. May 15 12:52:00.936027 systemd[1715]: Reached target default.target - Main User Target. May 15 12:52:00.936060 systemd[1715]: Startup finished in 147ms. May 15 12:52:00.936292 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 12:52:00.953716 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 12:52:01.218639 systemd[1]: Started sshd@1-172.236.126.108:22-139.178.89.65:42342.service - OpenSSH per-connection server daemon (139.178.89.65:42342). May 15 12:52:01.569149 sshd[1726]: Accepted publickey for core from 139.178.89.65 port 42342 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:52:01.571189 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:52:01.576528 systemd-logind[1531]: New session 2 of user core. 
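The kubelet crash earlier in this stretch ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml") is the normal pre-bootstrap state: that file only appears once kubeadm or an equivalent provisioner writes it. As a sketch, a minimal KubeletConfiguration of the kind that lands there; every field value below is an illustrative assumption, not this node's real config:

```python
# A hypothetical minimal /var/lib/kubelet/config.yaml; the real file is
# written during node bootstrap, which has not happened yet here.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
"""
print(MINIMAL_KUBELET_CONFIG)
```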
May 15 12:52:01.581672 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 12:52:01.815121 sshd[1728]: Connection closed by 139.178.89.65 port 42342 May 15 12:52:01.815693 sshd-session[1726]: pam_unix(sshd:session): session closed for user core May 15 12:52:01.820530 systemd[1]: sshd@1-172.236.126.108:22-139.178.89.65:42342.service: Deactivated successfully. May 15 12:52:01.822448 systemd[1]: session-2.scope: Deactivated successfully. May 15 12:52:01.823366 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit. May 15 12:52:01.825285 systemd-logind[1531]: Removed session 2. May 15 12:52:01.874786 systemd[1]: Started sshd@2-172.236.126.108:22-139.178.89.65:42358.service - OpenSSH per-connection server daemon (139.178.89.65:42358). May 15 12:52:02.204762 sshd[1734]: Accepted publickey for core from 139.178.89.65 port 42358 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:52:02.206190 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:52:02.211392 systemd-logind[1531]: New session 3 of user core. May 15 12:52:02.218712 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 12:52:02.440815 sshd[1736]: Connection closed by 139.178.89.65 port 42358 May 15 12:52:02.441603 sshd-session[1734]: pam_unix(sshd:session): session closed for user core May 15 12:52:02.445107 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit. May 15 12:52:02.445795 systemd[1]: sshd@2-172.236.126.108:22-139.178.89.65:42358.service: Deactivated successfully. May 15 12:52:02.447526 systemd[1]: session-3.scope: Deactivated successfully. May 15 12:52:02.449128 systemd-logind[1531]: Removed session 3. May 15 12:52:02.501969 systemd[1]: Started sshd@3-172.236.126.108:22-139.178.89.65:42362.service - OpenSSH per-connection server daemon (139.178.89.65:42362). May 15 12:52:02.855856 sshd[1742]: Accepted publickey for core from 139.178.89.65 port 42362 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:52:02.857795 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:52:02.866327 systemd-logind[1531]: New session 4 of user core. May 15 12:52:02.872735 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 12:52:03.101664 sshd[1744]: Connection closed by 139.178.89.65 port 42362 May 15 12:52:03.102384 sshd-session[1742]: pam_unix(sshd:session): session closed for user core May 15 12:52:03.107447 systemd[1]: sshd@3-172.236.126.108:22-139.178.89.65:42362.service: Deactivated successfully. May 15 12:52:03.110341 systemd[1]: session-4.scope: Deactivated successfully. May 15 12:52:03.112832 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit. May 15 12:52:03.114352 systemd-logind[1531]: Removed session 4. May 15 12:52:03.162751 systemd[1]: Started sshd@4-172.236.126.108:22-139.178.89.65:42378.service - OpenSSH per-connection server daemon (139.178.89.65:42378). May 15 12:52:03.504936 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 42378 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:52:03.506621 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:52:03.511838 systemd-logind[1531]: New session 5 of user core. May 15 12:52:03.517680 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 12:52:03.707434 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 12:52:03.707777 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:52:03.722516 sudo[1753]: pam_unix(sudo:session): session closed for user root May 15 12:52:03.772610 sshd[1752]: Connection closed by 139.178.89.65 port 42378 May 15 12:52:03.773185 sshd-session[1750]: pam_unix(sshd:session): session closed for user core May 15 12:52:03.776947 systemd[1]: sshd@4-172.236.126.108:22-139.178.89.65:42378.service: Deactivated successfully. May 15 12:52:03.778501 systemd[1]: session-5.scope: Deactivated successfully. May 15 12:52:03.780321 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit. May 15 12:52:03.781303 systemd-logind[1531]: Removed session 5. May 15 12:52:03.843225 systemd[1]: Started sshd@5-172.236.126.108:22-139.178.89.65:42386.service - OpenSSH per-connection server daemon (139.178.89.65:42386). May 15 12:52:04.173726 sshd[1759]: Accepted publickey for core from 139.178.89.65 port 42386 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:52:04.175534 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:52:04.180549 systemd-logind[1531]: New session 6 of user core. May 15 12:52:04.185707 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 12:52:04.368057 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 12:52:04.368371 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:52:04.375717 sudo[1763]: pam_unix(sudo:session): session closed for user root May 15 12:52:04.381824 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 12:52:04.382279 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:52:04.392519 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 12:52:04.439154 augenrules[1785]: No rules May 15 12:52:04.440649 systemd[1]: audit-rules.service: Deactivated successfully. May 15 12:52:04.440932 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 12:52:04.442070 sudo[1762]: pam_unix(sudo:session): session closed for user root May 15 12:52:04.492406 sshd[1761]: Connection closed by 139.178.89.65 port 42386 May 15 12:52:04.493198 sshd-session[1759]: pam_unix(sshd:session): session closed for user core May 15 12:52:04.498329 systemd[1]: sshd@5-172.236.126.108:22-139.178.89.65:42386.service: Deactivated successfully. May 15 12:52:04.499984 systemd[1]: session-6.scope: Deactivated successfully. May 15 12:52:04.500890 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. May 15 12:52:04.502130 systemd-logind[1531]: Removed session 6. May 15 12:52:04.560838 systemd[1]: Started sshd@6-172.236.126.108:22-139.178.89.65:42396.service - OpenSSH per-connection server daemon (139.178.89.65:42396). May 15 12:52:04.907328 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 42396 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:52:04.908696 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:52:04.913791 systemd-logind[1531]: New session 7 of user core. May 15 12:52:04.918702 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 15 12:52:05.108628 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 12:52:05.108930 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 12:52:05.391848 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 12:52:05.401853 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 12:52:05.595960 dockerd[1815]: time="2025-05-15T12:52:05.595900833Z" level=info msg="Starting up" May 15 12:52:05.597545 dockerd[1815]: time="2025-05-15T12:52:05.597522943Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 12:52:05.654020 dockerd[1815]: time="2025-05-15T12:52:05.653923313Z" level=info msg="Loading containers: start." May 15 12:52:05.664618 kernel: Initializing XFRM netlink socket May 15 12:52:05.897038 systemd-networkd[1458]: docker0: Link UP May 15 12:52:05.899671 dockerd[1815]: time="2025-05-15T12:52:05.899624433Z" level=info msg="Loading containers: done." May 15 12:52:05.912322 dockerd[1815]: time="2025-05-15T12:52:05.912222323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 12:52:05.912322 dockerd[1815]: time="2025-05-15T12:52:05.912301193Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 12:52:05.912467 dockerd[1815]: time="2025-05-15T12:52:05.912399213Z" level=info msg="Initializing buildkit" May 15 12:52:05.934991 dockerd[1815]: time="2025-05-15T12:52:05.934940883Z" level=info msg="Completed buildkit initialization" May 15 12:52:05.941293 dockerd[1815]: time="2025-05-15T12:52:05.941250633Z" level=info msg="Daemon has completed initialization" May 15 12:52:05.941499 dockerd[1815]: time="2025-05-15T12:52:05.941377643Z" level=info msg="API listen on /run/docker.sock" May 15 12:52:05.941453 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 12:52:06.505966 containerd[1555]: time="2025-05-15T12:52:06.505928943Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 12:52:07.233945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006477388.mount: Deactivated successfully. 
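Docker's "API listen on /run/docker.sock" line above means the engine is now reachable over its Unix socket. A standard-library sketch of dialing that socket and asking for the version; the printed value should match the 28.0.1 reported in the daemon log:

```python
# Query the Docker engine API over /run/docker.sock with only the
# standard library (requires permission to open the socket).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(json.loads(conn.getresponse().read())["Version"])  # e.g. "28.0.1"
```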
May 15 12:52:08.329363 containerd[1555]: time="2025-05-15T12:52:08.329287223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:08.330180 containerd[1555]: time="2025-05-15T12:52:08.330120613Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 15 12:52:08.331583 containerd[1555]: time="2025-05-15T12:52:08.330590403Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:08.332707 containerd[1555]: time="2025-05-15T12:52:08.332671663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:08.333713 containerd[1555]: time="2025-05-15T12:52:08.333457453Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.82749302s" May 15 12:52:08.333713 containerd[1555]: time="2025-05-15T12:52:08.333487423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 12:52:08.334468 containerd[1555]: time="2025-05-15T12:52:08.334439603Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 12:52:09.754174 containerd[1555]: time="2025-05-15T12:52:09.753409993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:09.754174 containerd[1555]: time="2025-05-15T12:52:09.754136153Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 15 12:52:09.754687 containerd[1555]: time="2025-05-15T12:52:09.754661543Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:09.756131 containerd[1555]: time="2025-05-15T12:52:09.756111723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:09.756945 containerd[1555]: time="2025-05-15T12:52:09.756909013Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.42243876s" May 15 12:52:09.757025 containerd[1555]: time="2025-05-15T12:52:09.757010483Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 12:52:09.757808 
containerd[1555]: time="2025-05-15T12:52:09.757782223Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 12:52:10.064603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 12:52:10.066722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:10.230216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:10.242856 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 12:52:10.283974 kubelet[2081]: E0515 12:52:10.283898 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 12:52:10.289082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 12:52:10.289262 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 12:52:10.289666 systemd[1]: kubelet.service: Consumed 176ms CPU time, 104.4M memory peak. May 15 12:52:10.991868 containerd[1555]: time="2025-05-15T12:52:10.991796613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:10.992726 containerd[1555]: time="2025-05-15T12:52:10.992693623Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 15 12:52:10.993501 containerd[1555]: time="2025-05-15T12:52:10.993476103Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:10.995438 containerd[1555]: time="2025-05-15T12:52:10.995400453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:10.996050 containerd[1555]: time="2025-05-15T12:52:10.996022473Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.23821136s" May 15 12:52:10.996092 containerd[1555]: time="2025-05-15T12:52:10.996050533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 12:52:10.996704 containerd[1555]: time="2025-05-15T12:52:10.996684623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 12:52:12.155054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884255396.mount: Deactivated successfully. 
May 15 12:52:12.486331 containerd[1555]: time="2025-05-15T12:52:12.486215973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:12.487193 containerd[1555]: time="2025-05-15T12:52:12.487081283Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 15 12:52:12.487686 containerd[1555]: time="2025-05-15T12:52:12.487645423Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:12.489028 containerd[1555]: time="2025-05-15T12:52:12.488999103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:12.489588 containerd[1555]: time="2025-05-15T12:52:12.489485513Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.49277573s" May 15 12:52:12.489588 containerd[1555]: time="2025-05-15T12:52:12.489517073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 12:52:12.490453 containerd[1555]: time="2025-05-15T12:52:12.490365893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 12:52:13.247257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621869282.mount: Deactivated successfully. 
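The pull messages carry enough data for a rough throughput estimate: the kube-proxy image above reports 30916875 bytes in 1.49277573 s. A quick computation from the log's own figures (registry-side compression and layer caching make this only an approximation):

```python
# Rough pull throughput for the kube-proxy image, straight from the
# size and duration printed in the log above.
size_bytes, seconds = 30_916_875, 1.49277573
print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # ~19.8 MiB/s
```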
May 15 12:52:13.984741 containerd[1555]: time="2025-05-15T12:52:13.984641653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:13.985821 containerd[1555]: time="2025-05-15T12:52:13.985659163Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 15 12:52:13.986323 containerd[1555]: time="2025-05-15T12:52:13.986285313Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:13.989897 containerd[1555]: time="2025-05-15T12:52:13.988767303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:13.989897 containerd[1555]: time="2025-05-15T12:52:13.989751103Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.49935647s" May 15 12:52:13.989897 containerd[1555]: time="2025-05-15T12:52:13.989789963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 12:52:13.990953 containerd[1555]: time="2025-05-15T12:52:13.990908243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 12:52:14.546278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179423559.mount: Deactivated successfully. 
May 15 12:52:14.551892 containerd[1555]: time="2025-05-15T12:52:14.551833403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:52:14.552525 containerd[1555]: time="2025-05-15T12:52:14.552492493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 12:52:14.553593 containerd[1555]: time="2025-05-15T12:52:14.552984453Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:52:14.554538 containerd[1555]: time="2025-05-15T12:52:14.554493223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 12:52:14.555593 containerd[1555]: time="2025-05-15T12:52:14.555122663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 564.17981ms" May 15 12:52:14.555593 containerd[1555]: time="2025-05-15T12:52:14.555165623Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 12:52:14.556018 containerd[1555]: time="2025-05-15T12:52:14.555986793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 12:52:15.240221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166746813.mount: Deactivated successfully. 
May 15 12:52:16.728685 containerd[1555]: time="2025-05-15T12:52:16.728607653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:16.729720 containerd[1555]: time="2025-05-15T12:52:16.729681963Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 15 12:52:16.730595 containerd[1555]: time="2025-05-15T12:52:16.730506023Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:16.733057 containerd[1555]: time="2025-05-15T12:52:16.733010573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:16.736455 containerd[1555]: time="2025-05-15T12:52:16.734050563Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.17803508s" May 15 12:52:16.736455 containerd[1555]: time="2025-05-15T12:52:16.734085983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 12:52:19.266617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:19.266795 systemd[1]: kubelet.service: Consumed 176ms CPU time, 104.4M memory peak. May 15 12:52:19.269388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:19.302641 systemd[1]: Reload requested from client PID 2238 ('systemctl') (unit session-7.scope)... May 15 12:52:19.302779 systemd[1]: Reloading... May 15 12:52:19.424609 zram_generator::config[2282]: No configuration found. May 15 12:52:19.525428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:52:19.627302 systemd[1]: Reloading finished in 324 ms. May 15 12:52:19.690076 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 12:52:19.690177 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 12:52:19.690473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:19.690541 systemd[1]: kubelet.service: Consumed 126ms CPU time, 91.8M memory peak. May 15 12:52:19.692122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:19.853659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:19.865914 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:52:19.903013 kubelet[2336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:52:19.903013 kubelet[2336]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 15 12:52:19.903013 kubelet[2336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:52:19.903375 kubelet[2336]: I0515 12:52:19.903051 2336 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:52:20.459727 kubelet[2336]: I0515 12:52:20.459654 2336 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 12:52:20.459727 kubelet[2336]: I0515 12:52:20.459705 2336 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:52:20.460056 kubelet[2336]: I0515 12:52:20.460025 2336 server.go:954] "Client rotation is on, will bootstrap in background" May 15 12:52:20.489514 kubelet[2336]: E0515 12:52:20.489468 2336 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.236.126.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.126.108:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:20.490572 kubelet[2336]: I0515 12:52:20.490414 2336 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:52:20.502712 kubelet[2336]: I0515 12:52:20.502675 2336 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:52:20.506236 kubelet[2336]: I0515 12:52:20.506213 2336 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 12:52:20.507753 kubelet[2336]: I0515 12:52:20.507686 2336 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:52:20.507947 kubelet[2336]: I0515 12:52:20.507726 2336 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-126-108","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:52:20.508105 kubelet[2336]: I0515 12:52:20.507950 2336 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:52:20.508105 kubelet[2336]: I0515 12:52:20.507965 2336 container_manager_linux.go:304] "Creating device plugin manager" May 15 12:52:20.508179 kubelet[2336]: I0515 12:52:20.508106 2336 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:20.512134 kubelet[2336]: I0515 12:52:20.512107 2336 kubelet.go:446] "Attempting to sync node with API server" May 15 12:52:20.512202 kubelet[2336]: I0515 12:52:20.512137 2336 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:52:20.512202 kubelet[2336]: I0515 12:52:20.512175 2336 kubelet.go:352] "Adding apiserver pod source" May 15 12:52:20.512202 kubelet[2336]: I0515 12:52:20.512191 2336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:52:20.517893 kubelet[2336]: W0515 12:52:20.517291 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.126.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.126.108:6443: connect: connection refused May 15 12:52:20.517893 kubelet[2336]: E0515 12:52:20.517382 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.126.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.126.108:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:20.517893 kubelet[2336]: W0515 
12:52:20.517478 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.126.108:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-126-108&limit=500&resourceVersion=0": dial tcp 172.236.126.108:6443: connect: connection refused May 15 12:52:20.517893 kubelet[2336]: E0515 12:52:20.517517 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.126.108:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-126-108&limit=500&resourceVersion=0\": dial tcp 172.236.126.108:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:20.518244 kubelet[2336]: I0515 12:52:20.518225 2336 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:52:20.518768 kubelet[2336]: I0515 12:52:20.518751 2336 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:52:20.519497 kubelet[2336]: W0515 12:52:20.519480 2336 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 12:52:20.524728 kubelet[2336]: I0515 12:52:20.524711 2336 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 12:52:20.524837 kubelet[2336]: I0515 12:52:20.524825 2336 server.go:1287] "Started kubelet" May 15 12:52:20.534792 kubelet[2336]: I0515 12:52:20.534744 2336 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:52:20.536601 kubelet[2336]: I0515 12:52:20.535674 2336 server.go:490] "Adding debug handlers to kubelet server" May 15 12:52:20.536601 kubelet[2336]: I0515 12:52:20.535941 2336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:52:20.536601 kubelet[2336]: I0515 12:52:20.536366 2336 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:52:20.539441 kubelet[2336]: I0515 12:52:20.538051 2336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:52:20.541961 kubelet[2336]: E0515 12:52:20.536726 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.126.108:6443/api/v1/namespaces/default/events\": dial tcp 172.236.126.108:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-126-108.183fb4684e35f33f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-126-108,UID:172-236-126-108,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-126-108,},FirstTimestamp:2025-05-15 12:52:20.524798783 +0000 UTC m=+0.654404981,LastTimestamp:2025-05-15 12:52:20.524798783 +0000 UTC m=+0.654404981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-126-108,}" May 15 12:52:20.546604 kubelet[2336]: I0515 12:52:20.543545 2336 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:52:20.546604 kubelet[2336]: E0515 12:52:20.545351 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:20.546604 kubelet[2336]: I0515 
12:52:20.545376 2336 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 12:52:20.546604 kubelet[2336]: I0515 12:52:20.545580 2336 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 12:52:20.546604 kubelet[2336]: I0515 12:52:20.545622 2336 reconciler.go:26] "Reconciler: start to sync state" May 15 12:52:20.546604 kubelet[2336]: W0515 12:52:20.545988 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.126.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.126.108:6443: connect: connection refused May 15 12:52:20.546604 kubelet[2336]: E0515 12:52:20.546048 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.126.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.126.108:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:20.546604 kubelet[2336]: E0515 12:52:20.546240 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.126.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-126-108?timeout=10s\": dial tcp 172.236.126.108:6443: connect: connection refused" interval="200ms" May 15 12:52:20.550577 kubelet[2336]: I0515 12:52:20.549911 2336 factory.go:221] Registration of the containerd container factory successfully May 15 12:52:20.550577 kubelet[2336]: I0515 12:52:20.549932 2336 factory.go:221] Registration of the systemd container factory successfully May 15 12:52:20.550577 kubelet[2336]: I0515 12:52:20.550006 2336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:52:20.554284 kubelet[2336]: E0515 12:52:20.554241 2336 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:52:20.575343 kubelet[2336]: I0515 12:52:20.575306 2336 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 12:52:20.575343 kubelet[2336]: I0515 12:52:20.575327 2336 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 12:52:20.575343 kubelet[2336]: I0515 12:52:20.575344 2336 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:20.576161 kubelet[2336]: I0515 12:52:20.576133 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:52:20.577401 kubelet[2336]: I0515 12:52:20.577378 2336 policy_none.go:49] "None policy: Start" May 15 12:52:20.577401 kubelet[2336]: I0515 12:52:20.577400 2336 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 12:52:20.577462 kubelet[2336]: I0515 12:52:20.577412 2336 state_mem.go:35] "Initializing new in-memory state store" May 15 12:52:20.577715 kubelet[2336]: I0515 12:52:20.577689 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:52:20.577715 kubelet[2336]: I0515 12:52:20.577711 2336 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 12:52:20.577778 kubelet[2336]: I0515 12:52:20.577732 2336 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
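The reflectors and the lease controller above all fail the same way: a TCP dial to https://172.236.126.108:6443 is refused, because the kubelet that needs the apiserver is the very component about to start it from the static pod manifests in /etc/kubernetes/manifests. A minimal sketch of what those clients observe (address and port copied from the log; run on this host it would print "connection refused" until the apiserver container is up):

    # A plain TCP connect to the apiserver endpoint. Nothing listens
    # there yet, so the kernel answers with a RST and the dial fails
    # with "connection refused", exactly as in the kubelet errors above.
    import socket

    def probe(host: str, port: int, timeout: float = 1.0) -> str:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"
        except ConnectionRefusedError:
            return "connection refused"
        except OSError as exc:
            return f"error: {exc}"

    print(probe("172.236.126.108", 6443))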
May 15 12:52:20.577778 kubelet[2336]: I0515 12:52:20.577739 2336 kubelet.go:2388] "Starting kubelet main sync loop" May 15 12:52:20.577831 kubelet[2336]: E0515 12:52:20.577809 2336 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:52:20.581878 kubelet[2336]: W0515 12:52:20.581750 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.126.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.126.108:6443: connect: connection refused May 15 12:52:20.581878 kubelet[2336]: E0515 12:52:20.581787 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.236.126.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.126.108:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:20.586875 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 12:52:20.598943 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 12:52:20.602494 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 12:52:20.612750 kubelet[2336]: I0515 12:52:20.612735 2336 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:52:20.613094 kubelet[2336]: I0515 12:52:20.613083 2336 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:52:20.613271 kubelet[2336]: I0515 12:52:20.613222 2336 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:52:20.613874 kubelet[2336]: I0515 12:52:20.613574 2336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:52:20.614540 kubelet[2336]: E0515 12:52:20.614523 2336 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 12:52:20.614636 kubelet[2336]: E0515 12:52:20.614625 2336 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-126-108\" not found" May 15 12:52:20.690290 systemd[1]: Created slice kubepods-burstable-pod63de536d0537cbffed03f61672894755.slice - libcontainer container kubepods-burstable-pod63de536d0537cbffed03f61672894755.slice. May 15 12:52:20.699294 kubelet[2336]: E0515 12:52:20.699256 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:20.702330 systemd[1]: Created slice kubepods-burstable-pod4f1f6f64e5aeb9d2658680fc6801b2a9.slice - libcontainer container kubepods-burstable-pod4f1f6f64e5aeb9d2658680fc6801b2a9.slice. May 15 12:52:20.710501 kubelet[2336]: E0515 12:52:20.709739 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:20.712606 systemd[1]: Created slice kubepods-burstable-pod6e965577c7295468b10d321923f45c8f.slice - libcontainer container kubepods-burstable-pod6e965577c7295468b10d321923f45c8f.slice. 
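The eviction manager whose control loop starts here enforces the HardEvictionThresholds carried in the nodeConfig dump above: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch of how such signal/threshold pairs are evaluated; the five thresholds are copied from the log, while the observations fed in at the bottom are invented for illustration:

    # Quantity thresholds compare absolute bytes; percentage thresholds
    # compare against a fraction of the resource's capacity.
    THRESHOLDS = {
        "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi
        "nodefs.available": ("percentage", 0.10),
        "nodefs.inodesFree": ("percentage", 0.05),
        "imagefs.available": ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def breached(signal: str, observed: float, capacity: float = 0.0) -> bool:
        kind, value = THRESHOLDS[signal]
        limit = value if kind == "quantity" else value * capacity
        return observed < limit

    print(breached("memory.available", 80 * 1024 * 1024))            # True: 80Mi < 100Mi
    print(breached("nodefs.available", 8 << 30, capacity=40 << 30))  # False: 20% free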
May 15 12:52:20.714424 kubelet[2336]: I0515 12:52:20.714397 2336 kubelet_node_status.go:76] "Attempting to register node" node="172-236-126-108" May 15 12:52:20.714734 kubelet[2336]: E0515 12:52:20.714663 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:20.714858 kubelet[2336]: E0515 12:52:20.714776 2336 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.236.126.108:6443/api/v1/nodes\": dial tcp 172.236.126.108:6443: connect: connection refused" node="172-236-126-108" May 15 12:52:20.747453 kubelet[2336]: I0515 12:52:20.747223 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63de536d0537cbffed03f61672894755-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-126-108\" (UID: \"63de536d0537cbffed03f61672894755\") " pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:20.747453 kubelet[2336]: I0515 12:52:20.747254 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-flexvolume-dir\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:20.747453 kubelet[2336]: I0515 12:52:20.747280 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-kubeconfig\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:20.747453 kubelet[2336]: I0515 12:52:20.747296 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e965577c7295468b10d321923f45c8f-kubeconfig\") pod \"kube-scheduler-172-236-126-108\" (UID: \"6e965577c7295468b10d321923f45c8f\") " pod="kube-system/kube-scheduler-172-236-126-108" May 15 12:52:20.747453 kubelet[2336]: I0515 12:52:20.747310 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63de536d0537cbffed03f61672894755-ca-certs\") pod \"kube-apiserver-172-236-126-108\" (UID: \"63de536d0537cbffed03f61672894755\") " pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:20.747741 kubelet[2336]: I0515 12:52:20.747326 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-ca-certs\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:20.747741 kubelet[2336]: I0515 12:52:20.747340 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-k8s-certs\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:20.747741 kubelet[2336]: 
I0515 12:52:20.747355 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:20.747741 kubelet[2336]: I0515 12:52:20.747369 2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63de536d0537cbffed03f61672894755-k8s-certs\") pod \"kube-apiserver-172-236-126-108\" (UID: \"63de536d0537cbffed03f61672894755\") " pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:20.747741 kubelet[2336]: E0515 12:52:20.747430 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.126.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-126-108?timeout=10s\": dial tcp 172.236.126.108:6443: connect: connection refused" interval="400ms" May 15 12:52:20.917423 kubelet[2336]: I0515 12:52:20.917370 2336 kubelet_node_status.go:76] "Attempting to register node" node="172-236-126-108" May 15 12:52:20.918221 kubelet[2336]: E0515 12:52:20.918005 2336 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.236.126.108:6443/api/v1/nodes\": dial tcp 172.236.126.108:6443: connect: connection refused" node="172-236-126-108" May 15 12:52:21.000688 kubelet[2336]: E0515 12:52:21.000466 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.002021 containerd[1555]: time="2025-05-15T12:52:21.001621883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-126-108,Uid:63de536d0537cbffed03f61672894755,Namespace:kube-system,Attempt:0,}" May 15 12:52:21.011873 kubelet[2336]: E0515 12:52:21.011769 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.012710 containerd[1555]: time="2025-05-15T12:52:21.012482283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-126-108,Uid:4f1f6f64e5aeb9d2658680fc6801b2a9,Namespace:kube-system,Attempt:0,}" May 15 12:52:21.015832 kubelet[2336]: E0515 12:52:21.015813 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.023424 containerd[1555]: time="2025-05-15T12:52:21.019628313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-126-108,Uid:6e965577c7295468b10d321923f45c8f,Namespace:kube-system,Attempt:0,}" May 15 12:52:21.038461 containerd[1555]: time="2025-05-15T12:52:21.038421093Z" level=info msg="connecting to shim 6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3" address="unix:///run/containerd/s/761319dbca7c9f824c8d8c5f940a01b407eba6e5dbe6ccffffd971a00139f95d" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:21.076980 containerd[1555]: time="2025-05-15T12:52:21.076892633Z" level=info msg="connecting to shim cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd" 
address="unix:///run/containerd/s/723faf6c1b93194723a857b234f6baf2fd6153dfdbfb9d69bc823d9ee701a127" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:21.125133 containerd[1555]: time="2025-05-15T12:52:21.118656383Z" level=info msg="connecting to shim a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00" address="unix:///run/containerd/s/b99cd4a54b7f1312b74116066b8c5a5a7cb06b1fe99c014f1aa27e63c9ff9601" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:21.136676 systemd[1]: Started cri-containerd-6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3.scope - libcontainer container 6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3. May 15 12:52:21.149945 kubelet[2336]: E0515 12:52:21.148283 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.126.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-126-108?timeout=10s\": dial tcp 172.236.126.108:6443: connect: connection refused" interval="800ms" May 15 12:52:21.193301 systemd[1]: Started cri-containerd-cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd.scope - libcontainer container cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd. May 15 12:52:21.203320 systemd[1]: Started cri-containerd-a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00.scope - libcontainer container a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00. May 15 12:52:21.242233 containerd[1555]: time="2025-05-15T12:52:21.242166703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-126-108,Uid:63de536d0537cbffed03f61672894755,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3\"" May 15 12:52:21.245693 kubelet[2336]: E0515 12:52:21.245656 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.248525 containerd[1555]: time="2025-05-15T12:52:21.248494613Z" level=info msg="CreateContainer within sandbox \"6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 12:52:21.258758 containerd[1555]: time="2025-05-15T12:52:21.258670303Z" level=info msg="Container 487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:21.265495 containerd[1555]: time="2025-05-15T12:52:21.264001973Z" level=info msg="CreateContainer within sandbox \"6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8\"" May 15 12:52:21.265495 containerd[1555]: time="2025-05-15T12:52:21.264324803Z" level=info msg="StartContainer for \"487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8\"" May 15 12:52:21.265495 containerd[1555]: time="2025-05-15T12:52:21.265185553Z" level=info msg="connecting to shim 487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8" address="unix:///run/containerd/s/761319dbca7c9f824c8d8c5f940a01b407eba6e5dbe6ccffffd971a00139f95d" protocol=ttrpc version=3 May 15 12:52:21.302422 containerd[1555]: time="2025-05-15T12:52:21.302369433Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-236-126-108,Uid:4f1f6f64e5aeb9d2658680fc6801b2a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd\"" May 15 12:52:21.304283 kubelet[2336]: E0515 12:52:21.304107 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.306852 containerd[1555]: time="2025-05-15T12:52:21.306794533Z" level=info msg="CreateContainer within sandbox \"cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 12:52:21.323370 systemd[1]: Started cri-containerd-487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8.scope - libcontainer container 487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8. May 15 12:52:21.325658 containerd[1555]: time="2025-05-15T12:52:21.325537073Z" level=info msg="Container 3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:21.334217 kubelet[2336]: I0515 12:52:21.333752 2336 kubelet_node_status.go:76] "Attempting to register node" node="172-236-126-108" May 15 12:52:21.334217 kubelet[2336]: E0515 12:52:21.334157 2336 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.236.126.108:6443/api/v1/nodes\": dial tcp 172.236.126.108:6443: connect: connection refused" node="172-236-126-108" May 15 12:52:21.337605 containerd[1555]: time="2025-05-15T12:52:21.337564623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-126-108,Uid:6e965577c7295468b10d321923f45c8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00\"" May 15 12:52:21.339754 kubelet[2336]: E0515 12:52:21.339725 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.340200 containerd[1555]: time="2025-05-15T12:52:21.340150443Z" level=info msg="CreateContainer within sandbox \"cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468\"" May 15 12:52:21.340521 containerd[1555]: time="2025-05-15T12:52:21.340480733Z" level=info msg="StartContainer for \"3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468\"" May 15 12:52:21.341462 containerd[1555]: time="2025-05-15T12:52:21.341387803Z" level=info msg="connecting to shim 3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468" address="unix:///run/containerd/s/723faf6c1b93194723a857b234f6baf2fd6153dfdbfb9d69bc823d9ee701a127" protocol=ttrpc version=3 May 15 12:52:21.345589 containerd[1555]: time="2025-05-15T12:52:21.345006383Z" level=info msg="CreateContainer within sandbox \"a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 12:52:21.357708 kubelet[2336]: W0515 12:52:21.357459 2336 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.236.126.108:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-126-108&limit=500&resourceVersion=0": dial tcp 172.236.126.108:6443: connect: connection refused May 15 12:52:21.357851 kubelet[2336]: E0515 12:52:21.357802 2336 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.126.108:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-126-108&limit=500&resourceVersion=0\": dial tcp 172.236.126.108:6443: connect: connection refused" logger="UnhandledError" May 15 12:52:21.358513 containerd[1555]: time="2025-05-15T12:52:21.358492533Z" level=info msg="Container 5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:21.367869 containerd[1555]: time="2025-05-15T12:52:21.367839323Z" level=info msg="CreateContainer within sandbox \"a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6\"" May 15 12:52:21.368433 containerd[1555]: time="2025-05-15T12:52:21.368390913Z" level=info msg="StartContainer for \"5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6\"" May 15 12:52:21.369531 containerd[1555]: time="2025-05-15T12:52:21.369510813Z" level=info msg="connecting to shim 5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6" address="unix:///run/containerd/s/b99cd4a54b7f1312b74116066b8c5a5a7cb06b1fe99c014f1aa27e63c9ff9601" protocol=ttrpc version=3 May 15 12:52:21.375664 systemd[1]: Started cri-containerd-3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468.scope - libcontainer container 3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468. May 15 12:52:21.404705 systemd[1]: Started cri-containerd-5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6.scope - libcontainer container 5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6. 
May 15 12:52:21.415616 containerd[1555]: time="2025-05-15T12:52:21.415539903Z" level=info msg="StartContainer for \"487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8\" returns successfully" May 15 12:52:21.487064 containerd[1555]: time="2025-05-15T12:52:21.486988733Z" level=info msg="StartContainer for \"3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468\" returns successfully" May 15 12:52:21.568166 containerd[1555]: time="2025-05-15T12:52:21.568012423Z" level=info msg="StartContainer for \"5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6\" returns successfully" May 15 12:52:21.596709 kubelet[2336]: E0515 12:52:21.596663 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:21.596868 kubelet[2336]: E0515 12:52:21.596834 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.598588 kubelet[2336]: E0515 12:52:21.598531 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:21.599061 kubelet[2336]: E0515 12:52:21.599026 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:21.600865 kubelet[2336]: E0515 12:52:21.600836 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:21.601082 kubelet[2336]: E0515 12:52:21.601053 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:22.137080 kubelet[2336]: I0515 12:52:22.137027 2336 kubelet_node_status.go:76] "Attempting to register node" node="172-236-126-108" May 15 12:52:22.606069 kubelet[2336]: E0515 12:52:22.604780 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:22.606069 kubelet[2336]: E0515 12:52:22.604923 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:22.606069 kubelet[2336]: E0515 12:52:22.605069 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:22.606069 kubelet[2336]: E0515 12:52:22.605168 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:23.201306 kubelet[2336]: E0515 12:52:23.201225 2336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:23.284907 kubelet[2336]: I0515 12:52:23.284688 2336 kubelet_node_status.go:79] "Successfully registered node" node="172-236-126-108" May 15 12:52:23.284907 kubelet[2336]: 
E0515 12:52:23.284732 2336 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172-236-126-108\": node \"172-236-126-108\" not found" May 15 12:52:23.317824 kubelet[2336]: E0515 12:52:23.317770 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:23.418028 kubelet[2336]: E0515 12:52:23.417956 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:23.519048 kubelet[2336]: E0515 12:52:23.518872 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:23.619217 kubelet[2336]: E0515 12:52:23.619174 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:23.719678 kubelet[2336]: E0515 12:52:23.719634 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:23.820708 kubelet[2336]: E0515 12:52:23.820528 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:23.921378 kubelet[2336]: E0515 12:52:23.921324 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:24.022414 kubelet[2336]: E0515 12:52:24.022355 2336 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:24.025845 kubelet[2336]: E0515 12:52:24.025811 2336 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-126-108\" not found" node="172-236-126-108" May 15 12:52:24.025983 kubelet[2336]: E0515 12:52:24.025968 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:24.147181 kubelet[2336]: I0515 12:52:24.146581 2336 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:24.151169 kubelet[2336]: E0515 12:52:24.151139 2336 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-126-108\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:24.151414 kubelet[2336]: I0515 12:52:24.151279 2336 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:24.153098 kubelet[2336]: E0515 12:52:24.153074 2336 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-126-108\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:24.153098 kubelet[2336]: I0515 12:52:24.153094 2336 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-126-108" May 15 12:52:24.154252 kubelet[2336]: E0515 12:52:24.154231 2336 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-126-108\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-126-108" May 15 12:52:24.517309 kubelet[2336]: I0515 12:52:24.517099 2336 apiserver.go:52] 
"Watching apiserver" May 15 12:52:24.545797 kubelet[2336]: I0515 12:52:24.545757 2336 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 12:52:25.043389 kubelet[2336]: I0515 12:52:25.043359 2336 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:25.049160 kubelet[2336]: E0515 12:52:25.049115 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:25.310580 systemd[1]: Reload requested from client PID 2602 ('systemctl') (unit session-7.scope)... May 15 12:52:25.310601 systemd[1]: Reloading... May 15 12:52:25.458602 zram_generator::config[2644]: No configuration found. May 15 12:52:25.587390 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 12:52:25.610378 kubelet[2336]: E0515 12:52:25.610231 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:25.703481 systemd[1]: Reloading finished in 392 ms. May 15 12:52:25.730852 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:25.743900 systemd[1]: kubelet.service: Deactivated successfully. May 15 12:52:25.744249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:25.744315 systemd[1]: kubelet.service: Consumed 1.061s CPU time, 125.6M memory peak. May 15 12:52:25.746445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 12:52:25.922599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 12:52:25.932093 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 12:52:25.976863 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 12:52:25.977226 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 12:52:25.977226 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 12:52:25.977226 kubelet[2697]: I0515 12:52:25.977069 2697 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 12:52:25.984974 kubelet[2697]: I0515 12:52:25.984935 2697 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 12:52:25.984974 kubelet[2697]: I0515 12:52:25.984961 2697 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 12:52:25.985185 kubelet[2697]: I0515 12:52:25.985156 2697 server.go:954] "Client rotation is on, will bootstrap in background" May 15 12:52:25.986206 kubelet[2697]: I0515 12:52:25.986174 2697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 12:52:25.988199 kubelet[2697]: I0515 12:52:25.988165 2697 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 12:52:25.993195 kubelet[2697]: I0515 12:52:25.993148 2697 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 12:52:25.998569 kubelet[2697]: I0515 12:52:25.998497 2697 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 12:52:26.000605 kubelet[2697]: I0515 12:52:25.998728 2697 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 12:52:26.000605 kubelet[2697]: I0515 12:52:25.998771 2697 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-126-108","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 12:52:26.000605 kubelet[2697]: I0515 12:52:25.998987 2697 topology_manager.go:138] "Creating topology manager with none policy" May 15 12:52:26.000605 kubelet[2697]: I0515 12:52:25.998997 2697 container_manager_linux.go:304] "Creating device plugin manager" May 15 12:52:26.000780 kubelet[2697]: I0515 12:52:25.999040 2697 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:26.000780 kubelet[2697]: 
I0515 12:52:25.999179 2697 kubelet.go:446] "Attempting to sync node with API server" May 15 12:52:26.000780 kubelet[2697]: I0515 12:52:25.999190 2697 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 12:52:26.000780 kubelet[2697]: I0515 12:52:25.999220 2697 kubelet.go:352] "Adding apiserver pod source" May 15 12:52:26.000780 kubelet[2697]: I0515 12:52:25.999234 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 12:52:26.001507 kubelet[2697]: I0515 12:52:26.001483 2697 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 12:52:26.002055 kubelet[2697]: I0515 12:52:26.002040 2697 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 12:52:26.002846 kubelet[2697]: I0515 12:52:26.002834 2697 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 12:52:26.002956 kubelet[2697]: I0515 12:52:26.002946 2697 server.go:1287] "Started kubelet" May 15 12:52:26.006330 kubelet[2697]: I0515 12:52:26.006287 2697 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 12:52:26.007306 kubelet[2697]: I0515 12:52:26.007266 2697 server.go:490] "Adding debug handlers to kubelet server" May 15 12:52:26.008604 kubelet[2697]: I0515 12:52:26.007676 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 12:52:26.012775 kubelet[2697]: I0515 12:52:26.012751 2697 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 12:52:26.012866 kubelet[2697]: I0515 12:52:26.011154 2697 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 12:52:26.013502 kubelet[2697]: I0515 12:52:26.011029 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 12:52:26.019734 kubelet[2697]: I0515 12:52:26.019719 2697 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 12:52:26.020043 kubelet[2697]: E0515 12:52:26.020025 2697 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-236-126-108\" not found" May 15 12:52:26.021511 kubelet[2697]: I0515 12:52:26.021494 2697 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 12:52:26.021803 kubelet[2697]: I0515 12:52:26.021788 2697 reconciler.go:26] "Reconciler: start to sync state" May 15 12:52:26.023676 kubelet[2697]: I0515 12:52:26.023649 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 12:52:26.024768 kubelet[2697]: I0515 12:52:26.024753 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 12:52:26.024846 kubelet[2697]: I0515 12:52:26.024836 2697 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 12:52:26.024903 kubelet[2697]: I0515 12:52:26.024894 2697 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
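The podresources endpoint above is throttled at qps=100 with burstTokens=10, i.e. a token bucket. Only those two numbers come from the log; the implementation below is a minimal sketch of the policy, not the kubelet's actual limiter:

    import time

    class TokenBucket:
        def __init__(self, qps: float = 100.0, burst: int = 10):
            self.rate, self.capacity = qps, float(burst)
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self) -> bool:
            # Refill proportionally to elapsed time, capped at the burst size.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    bucket = TokenBucket()
    print(sum(bucket.allow() for _ in range(50)))  # ~10: the burst drains first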
May 15 12:52:26.024946 kubelet[2697]: I0515 12:52:26.024939 2697 kubelet.go:2388] "Starting kubelet main sync loop" May 15 12:52:26.025061 kubelet[2697]: E0515 12:52:26.025031 2697 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 12:52:26.027841 kubelet[2697]: E0515 12:52:26.027539 2697 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 12:52:26.033214 kubelet[2697]: I0515 12:52:26.033176 2697 factory.go:221] Registration of the systemd container factory successfully May 15 12:52:26.033332 kubelet[2697]: I0515 12:52:26.033300 2697 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 12:52:26.037007 kubelet[2697]: I0515 12:52:26.036989 2697 factory.go:221] Registration of the containerd container factory successfully May 15 12:52:26.106631 kubelet[2697]: I0515 12:52:26.106542 2697 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 12:52:26.106631 kubelet[2697]: I0515 12:52:26.106603 2697 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 12:52:26.106631 kubelet[2697]: I0515 12:52:26.106622 2697 state_mem.go:36] "Initialized new in-memory state store" May 15 12:52:26.106835 kubelet[2697]: I0515 12:52:26.106801 2697 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 12:52:26.106835 kubelet[2697]: I0515 12:52:26.106812 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 12:52:26.106835 kubelet[2697]: I0515 12:52:26.106832 2697 policy_none.go:49] "None policy: Start" May 15 12:52:26.106933 kubelet[2697]: I0515 12:52:26.106843 2697 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 12:52:26.106933 kubelet[2697]: I0515 12:52:26.106854 2697 state_mem.go:35] "Initializing new in-memory state store" May 15 12:52:26.106977 kubelet[2697]: I0515 12:52:26.106953 2697 state_mem.go:75] "Updated machine memory state" May 15 12:52:26.112421 kubelet[2697]: I0515 12:52:26.111853 2697 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 12:52:26.112421 kubelet[2697]: I0515 12:52:26.112054 2697 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 12:52:26.112421 kubelet[2697]: I0515 12:52:26.112065 2697 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 12:52:26.114144 kubelet[2697]: I0515 12:52:26.114130 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 12:52:26.116030 kubelet[2697]: E0515 12:52:26.116013 2697 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 12:52:26.125959 kubelet[2697]: I0515 12:52:26.125922 2697 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-126-108" May 15 12:52:26.126214 kubelet[2697]: I0515 12:52:26.126196 2697 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:26.126917 kubelet[2697]: I0515 12:52:26.126903 2697 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:26.138859 kubelet[2697]: E0515 12:52:26.138692 2697 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-126-108\" already exists" pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:26.217616 kubelet[2697]: I0515 12:52:26.216132 2697 kubelet_node_status.go:76] "Attempting to register node" node="172-236-126-108" May 15 12:52:26.224583 kubelet[2697]: I0515 12:52:26.222918 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63de536d0537cbffed03f61672894755-ca-certs\") pod \"kube-apiserver-172-236-126-108\" (UID: \"63de536d0537cbffed03f61672894755\") " pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:26.224583 kubelet[2697]: I0515 12:52:26.222959 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e965577c7295468b10d321923f45c8f-kubeconfig\") pod \"kube-scheduler-172-236-126-108\" (UID: \"6e965577c7295468b10d321923f45c8f\") " pod="kube-system/kube-scheduler-172-236-126-108" May 15 12:52:26.224583 kubelet[2697]: I0515 12:52:26.222995 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63de536d0537cbffed03f61672894755-k8s-certs\") pod \"kube-apiserver-172-236-126-108\" (UID: \"63de536d0537cbffed03f61672894755\") " pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:26.224583 kubelet[2697]: I0515 12:52:26.223017 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63de536d0537cbffed03f61672894755-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-126-108\" (UID: \"63de536d0537cbffed03f61672894755\") " pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:26.224583 kubelet[2697]: I0515 12:52:26.223038 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-ca-certs\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:26.224777 kubelet[2697]: I0515 12:52:26.223054 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-flexvolume-dir\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:26.224777 kubelet[2697]: I0515 12:52:26.223070 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-k8s-certs\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:26.224777 kubelet[2697]: I0515 12:52:26.223086 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-kubeconfig\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:26.224777 kubelet[2697]: I0515 12:52:26.223103 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f1f6f64e5aeb9d2658680fc6801b2a9-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-126-108\" (UID: \"4f1f6f64e5aeb9d2658680fc6801b2a9\") " pod="kube-system/kube-controller-manager-172-236-126-108" May 15 12:52:26.224777 kubelet[2697]: I0515 12:52:26.223300 2697 kubelet_node_status.go:125] "Node was previously registered" node="172-236-126-108" May 15 12:52:26.224777 kubelet[2697]: I0515 12:52:26.223358 2697 kubelet_node_status.go:79] "Successfully registered node" node="172-236-126-108" May 15 12:52:26.437922 kubelet[2697]: E0515 12:52:26.437845 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:26.438958 kubelet[2697]: E0515 12:52:26.438254 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:26.441100 kubelet[2697]: E0515 12:52:26.440573 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:27.008569 kubelet[2697]: I0515 12:52:27.008452 2697 apiserver.go:52] "Watching apiserver" May 15 12:52:27.077580 kubelet[2697]: I0515 12:52:27.077251 2697 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:27.077723 kubelet[2697]: E0515 12:52:27.077694 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:27.078583 kubelet[2697]: E0515 12:52:27.078541 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:27.121738 kubelet[2697]: E0515 12:52:27.121683 2697 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-126-108\" already exists" pod="kube-system/kube-apiserver-172-236-126-108" May 15 12:52:27.121942 kubelet[2697]: E0515 12:52:27.121914 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:27.122133 kubelet[2697]: I0515 12:52:27.122101 2697 desired_state_of_world_populator.go:157] "Finished populating initial desired 
state of world" May 15 12:52:27.160704 kubelet[2697]: I0515 12:52:27.160170 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-126-108" podStartSLOduration=1.160145253 podStartE2EDuration="1.160145253s" podCreationTimestamp="2025-05-15 12:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:27.133149213 +0000 UTC m=+1.194307631" watchObservedRunningTime="2025-05-15 12:52:27.160145253 +0000 UTC m=+1.221303671" May 15 12:52:27.184130 kubelet[2697]: I0515 12:52:27.184068 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-126-108" podStartSLOduration=2.184047293 podStartE2EDuration="2.184047293s" podCreationTimestamp="2025-05-15 12:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:27.167145763 +0000 UTC m=+1.228304181" watchObservedRunningTime="2025-05-15 12:52:27.184047293 +0000 UTC m=+1.245205721" May 15 12:52:27.197504 kubelet[2697]: I0515 12:52:27.197438 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-126-108" podStartSLOduration=1.197404893 podStartE2EDuration="1.197404893s" podCreationTimestamp="2025-05-15 12:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:27.186008683 +0000 UTC m=+1.247167101" watchObservedRunningTime="2025-05-15 12:52:27.197404893 +0000 UTC m=+1.258563311" May 15 12:52:28.079547 kubelet[2697]: E0515 12:52:28.079501 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:28.082643 kubelet[2697]: E0515 12:52:28.081006 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:28.087321 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 15 12:52:29.080582 kubelet[2697]: E0515 12:52:29.080523 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:29.263036 kubelet[2697]: E0515 12:52:29.262897 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:29.997341 kubelet[2697]: I0515 12:52:29.997258 2697 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 12:52:29.998195 containerd[1555]: time="2025-05-15T12:52:29.998075723Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 12:52:29.998993 kubelet[2697]: I0515 12:52:29.998767 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 12:52:30.714150 systemd[1]: Created slice kubepods-besteffort-pod2f2361e7_c0cb_44b6_b3ab_f1716cf28b73.slice - libcontainer container kubepods-besteffort-pod2f2361e7_c0cb_44b6_b3ab_f1716cf28b73.slice. 
May 15 12:52:30.755997 kubelet[2697]: I0515 12:52:30.755952 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f2361e7-c0cb-44b6-b3ab-f1716cf28b73-kube-proxy\") pod \"kube-proxy-65s92\" (UID: \"2f2361e7-c0cb-44b6-b3ab-f1716cf28b73\") " pod="kube-system/kube-proxy-65s92"
May 15 12:52:30.755997 kubelet[2697]: I0515 12:52:30.755992 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f2361e7-c0cb-44b6-b3ab-f1716cf28b73-xtables-lock\") pod \"kube-proxy-65s92\" (UID: \"2f2361e7-c0cb-44b6-b3ab-f1716cf28b73\") " pod="kube-system/kube-proxy-65s92"
May 15 12:52:30.755997 kubelet[2697]: I0515 12:52:30.756015 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f2361e7-c0cb-44b6-b3ab-f1716cf28b73-lib-modules\") pod \"kube-proxy-65s92\" (UID: \"2f2361e7-c0cb-44b6-b3ab-f1716cf28b73\") " pod="kube-system/kube-proxy-65s92"
May 15 12:52:30.756616 kubelet[2697]: I0515 12:52:30.756033 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jx7d\" (UniqueName: \"kubernetes.io/projected/2f2361e7-c0cb-44b6-b3ab-f1716cf28b73-kube-api-access-9jx7d\") pod \"kube-proxy-65s92\" (UID: \"2f2361e7-c0cb-44b6-b3ab-f1716cf28b73\") " pod="kube-system/kube-proxy-65s92"
May 15 12:52:30.864576 kubelet[2697]: E0515 12:52:30.864526 2697 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 15 12:52:30.865006 kubelet[2697]: E0515 12:52:30.864794 2697 projected.go:194] Error preparing data for projected volume kube-api-access-9jx7d for pod kube-system/kube-proxy-65s92: configmap "kube-root-ca.crt" not found
May 15 12:52:30.865006 kubelet[2697]: E0515 12:52:30.864880 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f2361e7-c0cb-44b6-b3ab-f1716cf28b73-kube-api-access-9jx7d podName:2f2361e7-c0cb-44b6-b3ab-f1716cf28b73 nodeName:}" failed. No retries permitted until 2025-05-15 12:52:31.364835138 +0000 UTC m=+5.425993556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9jx7d" (UniqueName: "kubernetes.io/projected/2f2361e7-c0cb-44b6-b3ab-f1716cf28b73-kube-api-access-9jx7d") pod "kube-proxy-65s92" (UID: "2f2361e7-c0cb-44b6-b3ab-f1716cf28b73") : configmap "kube-root-ca.crt" not found
May 15 12:52:30.959908 sudo[1797]: pam_unix(sudo:session): session closed for user root
May 15 12:52:31.014569 sshd[1796]: Connection closed by 139.178.89.65 port 42396
May 15 12:52:31.015722 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
May 15 12:52:31.022755 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit.
May 15 12:52:31.024607 systemd[1]: sshd@6-172.236.126.108:22-139.178.89.65:42396.service: Deactivated successfully.
May 15 12:52:31.028090 systemd[1]: session-7.scope: Deactivated successfully.
May 15 12:52:31.028350 systemd[1]: session-7.scope: Consumed 4.372s CPU time, 236.5M memory peak.
May 15 12:52:31.031012 systemd-logind[1531]: Removed session 7.
May 15 12:52:31.146370 kubelet[2697]: W0515 12:52:31.146297 2697 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-236-126-108" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-236-126-108' and this object
May 15 12:52:31.146648 kubelet[2697]: E0515 12:52:31.146347 2697 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-236-126-108\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-236-126-108' and this object" logger="UnhandledError"
May 15 12:52:31.146648 kubelet[2697]: I0515 12:52:31.146613 2697 status_manager.go:890] "Failed to get status for pod" podUID="12f854ae-71a7-489e-8fcc-7a66ea5678f6" pod="tigera-operator/tigera-operator-789496d6f5-rd6lw" err="pods \"tigera-operator-789496d6f5-rd6lw\" is forbidden: User \"system:node:172-236-126-108\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-236-126-108' and this object"
May 15 12:52:31.146800 kubelet[2697]: W0515 12:52:31.146786 2697 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:172-236-126-108" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-236-126-108' and this object
May 15 12:52:31.146911 kubelet[2697]: E0515 12:52:31.146881 2697 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:172-236-126-108\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-236-126-108' and this object" logger="UnhandledError"
May 15 12:52:31.153186 systemd[1]: Created slice kubepods-besteffort-pod12f854ae_71a7_489e_8fcc_7a66ea5678f6.slice - libcontainer container kubepods-besteffort-pod12f854ae_71a7_489e_8fcc_7a66ea5678f6.slice.
May 15 12:52:31.160623 kubelet[2697]: I0515 12:52:31.160582 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12f854ae-71a7-489e-8fcc-7a66ea5678f6-var-lib-calico\") pod \"tigera-operator-789496d6f5-rd6lw\" (UID: \"12f854ae-71a7-489e-8fcc-7a66ea5678f6\") " pod="tigera-operator/tigera-operator-789496d6f5-rd6lw"
May 15 12:52:31.160623 kubelet[2697]: I0515 12:52:31.160630 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qc52\" (UniqueName: \"kubernetes.io/projected/12f854ae-71a7-489e-8fcc-7a66ea5678f6-kube-api-access-4qc52\") pod \"tigera-operator-789496d6f5-rd6lw\" (UID: \"12f854ae-71a7-489e-8fcc-7a66ea5678f6\") " pod="tigera-operator/tigera-operator-789496d6f5-rd6lw"
May 15 12:52:31.624737 kubelet[2697]: E0515 12:52:31.624689 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:31.625527 containerd[1555]: time="2025-05-15T12:52:31.625465176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-65s92,Uid:2f2361e7-c0cb-44b6-b3ab-f1716cf28b73,Namespace:kube-system,Attempt:0,}"
May 15 12:52:31.645572 containerd[1555]: time="2025-05-15T12:52:31.645510603Z" level=info msg="connecting to shim 7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023" address="unix:///run/containerd/s/deb5d443d01b2dfad09ffc8597c737c2db1163f1786fa4936cc80c7948ce6615" namespace=k8s.io protocol=ttrpc version=3
May 15 12:52:31.675677 systemd[1]: Started cri-containerd-7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023.scope - libcontainer container 7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023.
May 15 12:52:31.700973 containerd[1555]: time="2025-05-15T12:52:31.700931594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-65s92,Uid:2f2361e7-c0cb-44b6-b3ab-f1716cf28b73,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023\""
May 15 12:52:31.701981 kubelet[2697]: E0515 12:52:31.701959 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:31.705333 containerd[1555]: time="2025-05-15T12:52:31.705302469Z" level=info msg="CreateContainer within sandbox \"7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 12:52:31.716655 containerd[1555]: time="2025-05-15T12:52:31.715254576Z" level=info msg="Container aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5: CDI devices from CRI Config.CDIDevices: []"
May 15 12:52:31.723002 containerd[1555]: time="2025-05-15T12:52:31.722962850Z" level=info msg="CreateContainer within sandbox \"7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5\""
May 15 12:52:31.723676 containerd[1555]: time="2025-05-15T12:52:31.723618260Z" level=info msg="StartContainer for \"aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5\""
May 15 12:52:31.725681 containerd[1555]: time="2025-05-15T12:52:31.725618829Z" level=info msg="connecting to shim aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5" address="unix:///run/containerd/s/deb5d443d01b2dfad09ffc8597c737c2db1163f1786fa4936cc80c7948ce6615" protocol=ttrpc version=3
May 15 12:52:31.748706 systemd[1]: Started cri-containerd-aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5.scope - libcontainer container aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5.
May 15 12:52:31.796934 containerd[1555]: time="2025-05-15T12:52:31.796836114Z" level=info msg="StartContainer for \"aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5\" returns successfully"
May 15 12:52:32.091810 kubelet[2697]: E0515 12:52:32.090820 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:32.116880 kubelet[2697]: I0515 12:52:32.116781 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-65s92" podStartSLOduration=2.116684194 podStartE2EDuration="2.116684194s" podCreationTimestamp="2025-05-15 12:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:52:32.103045145 +0000 UTC m=+6.164203573" watchObservedRunningTime="2025-05-15 12:52:32.116684194 +0000 UTC m=+6.177842612"
May 15 12:52:32.267989 kubelet[2697]: E0515 12:52:32.267924 2697 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
May 15 12:52:32.267989 kubelet[2697]: E0515 12:52:32.267978 2697 projected.go:194] Error preparing data for projected volume kube-api-access-4qc52 for pod tigera-operator/tigera-operator-789496d6f5-rd6lw: failed to sync configmap cache: timed out waiting for the condition
May 15 12:52:32.268211 kubelet[2697]: E0515 12:52:32.268061 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12f854ae-71a7-489e-8fcc-7a66ea5678f6-kube-api-access-4qc52 podName:12f854ae-71a7-489e-8fcc-7a66ea5678f6 nodeName:}" failed. No retries permitted until 2025-05-15 12:52:32.768037675 +0000 UTC m=+6.829196093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4qc52" (UniqueName: "kubernetes.io/projected/12f854ae-71a7-489e-8fcc-7a66ea5678f6-kube-api-access-4qc52") pod "tigera-operator-789496d6f5-rd6lw" (UID: "12f854ae-71a7-489e-8fcc-7a66ea5678f6") : failed to sync configmap cache: timed out waiting for the condition
May 15 12:52:32.958649 containerd[1555]: time="2025-05-15T12:52:32.958515990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-rd6lw,Uid:12f854ae-71a7-489e-8fcc-7a66ea5678f6,Namespace:tigera-operator,Attempt:0,}"
May 15 12:52:32.981126 containerd[1555]: time="2025-05-15T12:52:32.981069623Z" level=info msg="connecting to shim 079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9" address="unix:///run/containerd/s/8783adc6258dd24dd50b5af0160a3b0df5eb412802c016db76fae2249f2c9e43" namespace=k8s.io protocol=ttrpc version=3
May 15 12:52:33.005726 systemd[1]: Started cri-containerd-079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9.scope - libcontainer container 079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9.
May 15 12:52:33.058065 containerd[1555]: time="2025-05-15T12:52:33.057935301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-rd6lw,Uid:12f854ae-71a7-489e-8fcc-7a66ea5678f6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9\""
May 15 12:52:33.060922 containerd[1555]: time="2025-05-15T12:52:33.060887260Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 15 12:52:34.264518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235853872.mount: Deactivated successfully.
May 15 12:52:35.261985 kubelet[2697]: E0515 12:52:35.261955 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:35.812279 containerd[1555]: time="2025-05-15T12:52:35.812207195Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:52:35.813422 containerd[1555]: time="2025-05-15T12:52:35.813173796Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 15 12:52:35.813985 containerd[1555]: time="2025-05-15T12:52:35.813942474Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:52:35.816657 containerd[1555]: time="2025-05-15T12:52:35.816613145Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 12:52:35.817128 containerd[1555]: time="2025-05-15T12:52:35.817089740Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.75617002s"
May 15 12:52:35.817128 containerd[1555]: time="2025-05-15T12:52:35.817127401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 15 12:52:35.821642 containerd[1555]: time="2025-05-15T12:52:35.821434950Z" level=info msg="CreateContainer within sandbox \"079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 15 12:52:35.838594 containerd[1555]: time="2025-05-15T12:52:35.837845448Z" level=info msg="Container b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831: CDI devices from CRI Config.CDIDevices: []"
May 15 12:52:35.849955 containerd[1555]: time="2025-05-15T12:52:35.849908536Z" level=info msg="CreateContainer within sandbox \"079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831\""
May 15 12:52:35.850782 containerd[1555]: time="2025-05-15T12:52:35.850738155Z" level=info msg="StartContainer for \"b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831\""
May 15 12:52:35.853274 containerd[1555]: time="2025-05-15T12:52:35.853214734Z" level=info msg="connecting to shim b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831" address="unix:///run/containerd/s/8783adc6258dd24dd50b5af0160a3b0df5eb412802c016db76fae2249f2c9e43" protocol=ttrpc version=3
May 15 12:52:35.950733 systemd[1]: Started cri-containerd-b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831.scope - libcontainer container b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831.
May 15 12:52:36.048666 containerd[1555]: time="2025-05-15T12:52:36.048618064Z" level=info msg="StartContainer for \"b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831\" returns successfully"
May 15 12:52:36.100986 kubelet[2697]: E0515 12:52:36.100945 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:37.158142 kubelet[2697]: E0515 12:52:37.158066 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:37.168007 kubelet[2697]: I0515 12:52:37.167787 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-rd6lw" podStartSLOduration=3.408550977 podStartE2EDuration="6.167766705s" podCreationTimestamp="2025-05-15 12:52:31 +0000 UTC" firstStartedPulling="2025-05-15 12:52:33.059292089 +0000 UTC m=+7.120450517" lastFinishedPulling="2025-05-15 12:52:35.818507827 +0000 UTC m=+9.879666245" observedRunningTime="2025-05-15 12:52:36.125341657 +0000 UTC m=+10.186500075" watchObservedRunningTime="2025-05-15 12:52:37.167766705 +0000 UTC m=+11.228925123"
May 15 12:52:38.104438 kubelet[2697]: E0515 12:52:38.104355 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:39.141625 systemd[1]: Created slice kubepods-besteffort-podc040ea10_a6a4_4ebe_bdfa_0023f6fe49e8.slice - libcontainer container kubepods-besteffort-podc040ea10_a6a4_4ebe_bdfa_0023f6fe49e8.slice.
May 15 12:52:39.188689 systemd[1]: Created slice kubepods-besteffort-podbb718bd4_90ab_4183_91f8_0d4b9a2bab80.slice - libcontainer container kubepods-besteffort-podbb718bd4_90ab_4183_91f8_0d4b9a2bab80.slice.
May 15 12:52:39.229877 kubelet[2697]: I0515 12:52:39.229820 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-tigera-ca-bundle\") pod \"calico-typha-79c5f7d894-hxzff\" (UID: \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\") " pod="calico-system/calico-typha-79c5f7d894-hxzff"
May 15 12:52:39.229877 kubelet[2697]: I0515 12:52:39.229866 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-log-dir\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.229877 kubelet[2697]: I0515 12:52:39.229894 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-flexvol-driver-host\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230413 kubelet[2697]: I0515 12:52:39.229915 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-xtables-lock\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230413 kubelet[2697]: I0515 12:52:39.229931 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-tigera-ca-bundle\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230413 kubelet[2697]: I0515 12:52:39.229946 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-typha-certs\") pod \"calico-typha-79c5f7d894-hxzff\" (UID: \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\") " pod="calico-system/calico-typha-79c5f7d894-hxzff"
May 15 12:52:39.230413 kubelet[2697]: I0515 12:52:39.229961 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-policysync\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230413 kubelet[2697]: I0515 12:52:39.229978 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-net-dir\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230520 kubelet[2697]: I0515 12:52:39.229993 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d96lv\" (UniqueName: \"kubernetes.io/projected/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-kube-api-access-d96lv\") pod \"calico-typha-79c5f7d894-hxzff\" (UID: \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\") " pod="calico-system/calico-typha-79c5f7d894-hxzff"
May 15 12:52:39.230520 kubelet[2697]: I0515 12:52:39.230008 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-node-certs\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230520 kubelet[2697]: I0515 12:52:39.230022 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-bin-dir\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230520 kubelet[2697]: I0515 12:52:39.230040 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-run-calico\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230520 kubelet[2697]: I0515 12:52:39.230059 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6q6f\" (UniqueName: \"kubernetes.io/projected/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-kube-api-access-k6q6f\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230680 kubelet[2697]: I0515 12:52:39.230072 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-lib-modules\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.230680 kubelet[2697]: I0515 12:52:39.230087 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-lib-calico\") pod \"calico-node-mvhbh\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " pod="calico-system/calico-node-mvhbh"
May 15 12:52:39.268672 kubelet[2697]: E0515 12:52:39.268615 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:39.374768 kubelet[2697]: E0515 12:52:39.355839 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.374768 kubelet[2697]: W0515 12:52:39.355864 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.374768 kubelet[2697]: E0515 12:52:39.355905 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.374768 kubelet[2697]: E0515 12:52:39.359386 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218"
May 15 12:52:39.374768 kubelet[2697]: E0515 12:52:39.368615 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.374768 kubelet[2697]: W0515 12:52:39.368631 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.374768 kubelet[2697]: E0515 12:52:39.368649 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.374768 kubelet[2697]: E0515 12:52:39.372736 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.374768 kubelet[2697]: W0515 12:52:39.372750 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.375072 kubelet[2697]: E0515 12:52:39.372767 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.389288 kubelet[2697]: E0515 12:52:39.389244 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.389652 kubelet[2697]: W0515 12:52:39.389623 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.389995 kubelet[2697]: E0515 12:52:39.389892 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.397254 kubelet[2697]: E0515 12:52:39.397139 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.397254 kubelet[2697]: W0515 12:52:39.397162 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.397254 kubelet[2697]: E0515 12:52:39.397183 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.421259 kubelet[2697]: E0515 12:52:39.421206 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.421602 kubelet[2697]: W0515 12:52:39.421527 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.421826 kubelet[2697]: E0515 12:52:39.421715 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.423073 kubelet[2697]: E0515 12:52:39.423043 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.423073 kubelet[2697]: W0515 12:52:39.423064 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.425167 kubelet[2697]: E0515 12:52:39.423083 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.426408 kubelet[2697]: E0515 12:52:39.426378 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.426456 kubelet[2697]: W0515 12:52:39.426411 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.426456 kubelet[2697]: E0515 12:52:39.426431 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.535418 kubelet[2697]: E0515 12:52:39.427134 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.535418 kubelet[2697]: W0515 12:52:39.427153 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.535418 kubelet[2697]: E0515 12:52:39.427290 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.535418 kubelet[2697]: E0515 12:52:39.428875 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.535418 kubelet[2697]: W0515 12:52:39.428995 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.535418 kubelet[2697]: E0515 12:52:39.429012 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.535418 kubelet[2697]: E0515 12:52:39.429766 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.535418 kubelet[2697]: W0515 12:52:39.429776 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.535418 kubelet[2697]: E0515 12:52:39.429787 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.535418 kubelet[2697]: E0515 12:52:39.431763 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.536030 containerd[1555]: time="2025-05-15T12:52:39.470840598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79c5f7d894-hxzff,Uid:c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8,Namespace:calico-system,Attempt:0,}"
May 15 12:52:39.536407 kubelet[2697]: W0515 12:52:39.431773 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.536407 kubelet[2697]: E0515 12:52:39.431785 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.536407 kubelet[2697]: E0515 12:52:39.432002 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.536407 kubelet[2697]: W0515 12:52:39.432010 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.536407 kubelet[2697]: E0515 12:52:39.432019 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.536407 kubelet[2697]: E0515 12:52:39.432250 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.536407 kubelet[2697]: W0515 12:52:39.432259 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.536407 kubelet[2697]: E0515 12:52:39.432285 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.536407 kubelet[2697]: E0515 12:52:39.432473 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.536407 kubelet[2697]: W0515 12:52:39.432480 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.536782 kubelet[2697]: E0515 12:52:39.432488 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.536782 kubelet[2697]: E0515 12:52:39.432723 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.536782 kubelet[2697]: W0515 12:52:39.432734 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.536782 kubelet[2697]: E0515 12:52:39.432771 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.536782 kubelet[2697]: E0515 12:52:39.433804 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.536782 kubelet[2697]: W0515 12:52:39.433812 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.536782 kubelet[2697]: E0515 12:52:39.433821 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.536782 kubelet[2697]: E0515 12:52:39.434042 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.536782 kubelet[2697]: W0515 12:52:39.434050 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.536782 kubelet[2697]: E0515 12:52:39.434058 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537038 kubelet[2697]: E0515 12:52:39.434583 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537038 kubelet[2697]: W0515 12:52:39.434592 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537038 kubelet[2697]: E0515 12:52:39.434600 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537038 kubelet[2697]: E0515 12:52:39.435628 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537038 kubelet[2697]: W0515 12:52:39.435637 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537038 kubelet[2697]: E0515 12:52:39.435646 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537038 kubelet[2697]: E0515 12:52:39.435861 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537038 kubelet[2697]: W0515 12:52:39.435868 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537038 kubelet[2697]: E0515 12:52:39.435877 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537038 kubelet[2697]: E0515 12:52:39.436219 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537285 kubelet[2697]: W0515 12:52:39.436227 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537285 kubelet[2697]: E0515 12:52:39.436235 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537285 kubelet[2697]: E0515 12:52:39.437294 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537285 kubelet[2697]: W0515 12:52:39.437303 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537285 kubelet[2697]: E0515 12:52:39.437311 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537285 kubelet[2697]: E0515 12:52:39.437468 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537285 kubelet[2697]: W0515 12:52:39.437475 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537285 kubelet[2697]: E0515 12:52:39.437486 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537285 kubelet[2697]: E0515 12:52:39.437712 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537285 kubelet[2697]: W0515 12:52:39.437722 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537520 kubelet[2697]: E0515 12:52:39.437729 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537520 kubelet[2697]: E0515 12:52:39.438803 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537520 kubelet[2697]: W0515 12:52:39.438813 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537520 kubelet[2697]: E0515 12:52:39.438822 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537520 kubelet[2697]: I0515 12:52:39.438852 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0d8dc71-c387-4c70-bebd-31f74a7e6218-socket-dir\") pod \"csi-node-driver-nq42m\" (UID: \"c0d8dc71-c387-4c70-bebd-31f74a7e6218\") " pod="calico-system/csi-node-driver-nq42m"
May 15 12:52:39.537520 kubelet[2697]: E0515 12:52:39.439585 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537520 kubelet[2697]: W0515 12:52:39.439596 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537520 kubelet[2697]: E0515 12:52:39.439649 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537753 kubelet[2697]: I0515 12:52:39.439668 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0d8dc71-c387-4c70-bebd-31f74a7e6218-registration-dir\") pod \"csi-node-driver-nq42m\" (UID: \"c0d8dc71-c387-4c70-bebd-31f74a7e6218\") " pod="calico-system/csi-node-driver-nq42m"
May 15 12:52:39.537753 kubelet[2697]: E0515 12:52:39.441802 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537753 kubelet[2697]: W0515 12:52:39.441809 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537753 kubelet[2697]: E0515 12:52:39.441823 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537753 kubelet[2697]: E0515 12:52:39.442014 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537753 kubelet[2697]: W0515 12:52:39.442022 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537753 kubelet[2697]: E0515 12:52:39.442043 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537753 kubelet[2697]: E0515 12:52:39.442217 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537753 kubelet[2697]: W0515 12:52:39.442225 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537993 kubelet[2697]: E0515 12:52:39.442251 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537993 kubelet[2697]: I0515 12:52:39.442278 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0d8dc71-c387-4c70-bebd-31f74a7e6218-varrun\") pod \"csi-node-driver-nq42m\" (UID: \"c0d8dc71-c387-4c70-bebd-31f74a7e6218\") " pod="calico-system/csi-node-driver-nq42m"
May 15 12:52:39.537993 kubelet[2697]: E0515 12:52:39.442468 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537993 kubelet[2697]: W0515 12:52:39.442478 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537993 kubelet[2697]: E0515 12:52:39.442490 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537993 kubelet[2697]: E0515 12:52:39.442668 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.537993 kubelet[2697]: W0515 12:52:39.442699 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.537993 kubelet[2697]: E0515 12:52:39.442718 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.537993 kubelet[2697]: E0515 12:52:39.442895 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.538218 kubelet[2697]: W0515 12:52:39.442904 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.538218 kubelet[2697]: E0515 12:52:39.469986 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:39.538218 kubelet[2697]: E0515 12:52:39.471197 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.538218 kubelet[2697]: W0515 12:52:39.471210 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.538218 kubelet[2697]: E0515 12:52:39.471224 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.538218 kubelet[2697]: E0515 12:52:39.471393 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.538218 kubelet[2697]: W0515 12:52:39.471427 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.538218 kubelet[2697]: E0515 12:52:39.471438 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.538218 kubelet[2697]: E0515 12:52:39.471459 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.538418 kubelet[2697]: I0515 12:52:39.471480 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0d8dc71-c387-4c70-bebd-31f74a7e6218-kubelet-dir\") pod \"csi-node-driver-nq42m\" (UID: \"c0d8dc71-c387-4c70-bebd-31f74a7e6218\") " pod="calico-system/csi-node-driver-nq42m"
May 15 12:52:39.538418 kubelet[2697]: E0515 12:52:39.471777 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.538418 kubelet[2697]: W0515 12:52:39.471786 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.538418 kubelet[2697]: E0515 12:52:39.471794 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.538418 kubelet[2697]: I0515 12:52:39.471809 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7nz\" (UniqueName: \"kubernetes.io/projected/c0d8dc71-c387-4c70-bebd-31f74a7e6218-kube-api-access-wf7nz\") pod \"csi-node-driver-nq42m\" (UID: \"c0d8dc71-c387-4c70-bebd-31f74a7e6218\") " pod="calico-system/csi-node-driver-nq42m"
May 15 12:52:39.538418 kubelet[2697]: E0515 12:52:39.472536 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.538418 kubelet[2697]: W0515 12:52:39.472545 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.538418 kubelet[2697]: E0515 12:52:39.472675 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.545772 kubelet[2697]: E0515 12:52:39.545728 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.545772 kubelet[2697]: W0515 12:52:39.545767 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.545864 kubelet[2697]: E0515 12:52:39.545798 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.546032 kubelet[2697]: E0515 12:52:39.546001 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:52:39.546783 containerd[1555]: time="2025-05-15T12:52:39.546736389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvhbh,Uid:bb718bd4-90ab-4183-91f8-0d4b9a2bab80,Namespace:calico-system,Attempt:0,}"
May 15 12:52:39.547332 kubelet[2697]: E0515 12:52:39.547306 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.547332 kubelet[2697]: W0515 12:52:39.547325 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.547390 kubelet[2697]: E0515 12:52:39.547335 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.547804 kubelet[2697]: E0515 12:52:39.547780 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.547831 kubelet[2697]: W0515 12:52:39.547810 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.547831 kubelet[2697]: E0515 12:52:39.547822 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.573657 kubelet[2697]: E0515 12:52:39.573303 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.573657 kubelet[2697]: W0515 12:52:39.573335 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.573657 kubelet[2697]: E0515 12:52:39.573369 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.575042 kubelet[2697]: E0515 12:52:39.574795 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.575042 kubelet[2697]: W0515 12:52:39.574917 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.575042 kubelet[2697]: E0515 12:52:39.574951 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.602958 kubelet[2697]: E0515 12:52:39.596646 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.602958 kubelet[2697]: W0515 12:52:39.596776 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.602958 kubelet[2697]: E0515 12:52:39.597107 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.602958 kubelet[2697]: W0515 12:52:39.597116 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.602958 kubelet[2697]: E0515 12:52:39.597352 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.602958 kubelet[2697]: W0515 12:52:39.597361 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.602958 kubelet[2697]: E0515 12:52:39.597379 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.602958 kubelet[2697]: E0515 12:52:39.597627 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.602958 kubelet[2697]: E0515 12:52:39.597698 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.602958 kubelet[2697]: E0515 12:52:39.597788 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.603384 kubelet[2697]: W0515 12:52:39.597795 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.603384 kubelet[2697]: E0515 12:52:39.597822 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 15 12:52:39.603384 kubelet[2697]: E0515 12:52:39.598092 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 15 12:52:39.603384 kubelet[2697]: W0515 12:52:39.598125 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 15 12:52:39.603384 kubelet[2697]: E0515 12:52:39.598137 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 15 12:52:39.603384 kubelet[2697]: E0515 12:52:39.598646 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603384 kubelet[2697]: W0515 12:52:39.598655 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603384 kubelet[2697]: E0515 12:52:39.598677 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603384 kubelet[2697]: E0515 12:52:39.599222 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603384 kubelet[2697]: W0515 12:52:39.599231 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603607 kubelet[2697]: E0515 12:52:39.599252 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603607 kubelet[2697]: E0515 12:52:39.599469 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603607 kubelet[2697]: W0515 12:52:39.599479 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603607 kubelet[2697]: E0515 12:52:39.599567 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603607 kubelet[2697]: E0515 12:52:39.599749 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603607 kubelet[2697]: W0515 12:52:39.599756 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603607 kubelet[2697]: E0515 12:52:39.600494 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603607 kubelet[2697]: E0515 12:52:39.600647 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603607 kubelet[2697]: W0515 12:52:39.600655 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603607 kubelet[2697]: E0515 12:52:39.600763 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:39.603791 kubelet[2697]: E0515 12:52:39.600815 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603791 kubelet[2697]: W0515 12:52:39.600822 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603791 kubelet[2697]: E0515 12:52:39.600903 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603791 kubelet[2697]: E0515 12:52:39.601009 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603791 kubelet[2697]: W0515 12:52:39.601016 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603791 kubelet[2697]: E0515 12:52:39.601095 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603791 kubelet[2697]: E0515 12:52:39.601201 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603791 kubelet[2697]: W0515 12:52:39.601208 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603791 kubelet[2697]: E0515 12:52:39.601232 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603791 kubelet[2697]: E0515 12:52:39.601394 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603964 kubelet[2697]: W0515 12:52:39.601404 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603964 kubelet[2697]: E0515 12:52:39.601424 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.603964 kubelet[2697]: E0515 12:52:39.601648 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603964 kubelet[2697]: W0515 12:52:39.601656 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603964 kubelet[2697]: E0515 12:52:39.601667 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:39.603964 kubelet[2697]: E0515 12:52:39.602597 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.603964 kubelet[2697]: W0515 12:52:39.602606 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.603964 kubelet[2697]: E0515 12:52:39.602627 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.604582 kubelet[2697]: E0515 12:52:39.604484 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.604582 kubelet[2697]: W0515 12:52:39.604496 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.604688 kubelet[2697]: E0515 12:52:39.604662 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.607265 kubelet[2697]: E0515 12:52:39.605083 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.607265 kubelet[2697]: W0515 12:52:39.605094 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.607265 kubelet[2697]: E0515 12:52:39.605379 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.607265 kubelet[2697]: E0515 12:52:39.605523 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.607265 kubelet[2697]: W0515 12:52:39.605530 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.607265 kubelet[2697]: E0515 12:52:39.605583 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.607265 kubelet[2697]: E0515 12:52:39.605794 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.607265 kubelet[2697]: W0515 12:52:39.605802 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.607265 kubelet[2697]: E0515 12:52:39.605915 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:39.607265 kubelet[2697]: E0515 12:52:39.606186 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.607474 kubelet[2697]: W0515 12:52:39.606194 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.607474 kubelet[2697]: E0515 12:52:39.606488 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.607474 kubelet[2697]: E0515 12:52:39.606590 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.607474 kubelet[2697]: W0515 12:52:39.606598 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.607474 kubelet[2697]: E0515 12:52:39.606606 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.607474 kubelet[2697]: E0515 12:52:39.606877 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.607474 kubelet[2697]: W0515 12:52:39.606887 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.607474 kubelet[2697]: E0515 12:52:39.606895 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 12:52:39.683453 kubelet[2697]: E0515 12:52:39.683342 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 12:52:39.683711 kubelet[2697]: W0515 12:52:39.683690 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 12:52:39.683795 kubelet[2697]: E0515 12:52:39.683781 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 12:52:39.689082 containerd[1555]: time="2025-05-15T12:52:39.688951176Z" level=info msg="connecting to shim 1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6" address="unix:///run/containerd/s/b0895005c6b962f44eb93a4ce11f0d2b75f7c68f2a12f151760fec1e6f404ad5" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:39.699990 containerd[1555]: time="2025-05-15T12:52:39.699916113Z" level=info msg="connecting to shim 49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea" address="unix:///run/containerd/s/1390c37b97e46f840f59fc912c9a1e991a1fef46e37e21eacbc9ed08e0c51bde" namespace=k8s.io protocol=ttrpc version=3 May 15 12:52:39.940650 systemd[1]: Started cri-containerd-1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6.scope - libcontainer container 1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6. May 15 12:52:39.953672 systemd[1]: Started cri-containerd-49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea.scope - libcontainer container 49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea. May 15 12:52:40.098809 containerd[1555]: time="2025-05-15T12:52:40.098742243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvhbh,Uid:bb718bd4-90ab-4183-91f8-0d4b9a2bab80,Namespace:calico-system,Attempt:0,} returns sandbox id \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\"" May 15 12:52:40.099750 kubelet[2697]: E0515 12:52:40.099724 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:40.101548 containerd[1555]: time="2025-05-15T12:52:40.101478725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 12:52:40.140723 containerd[1555]: time="2025-05-15T12:52:40.140654330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79c5f7d894-hxzff,Uid:c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\"" May 15 12:52:40.141791 kubelet[2697]: E0515 12:52:40.141758 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:41.026080 kubelet[2697]: E0515 12:52:41.026001 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:42.668095 update_engine[1532]: I20250515 12:52:42.668007 1532 update_attempter.cc:509] Updating boot flags... 
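The FlexVolume error storm above has a single cause: the kubelet periodically probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the vendor~driver directory nodeagent~uds, and runs its uds binary with the argument init; the binary is missing, so stdout is empty and unmarshalling it as JSON fails with "unexpected end of JSON input". For reference, a FlexVolume driver is expected to answer init with a JSON status on stdout. The sketch below is a minimal stand-in written for illustration; it is not the real uds driver, only the shape of the response the kubelet's probe is looking for.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus is the JSON envelope a FlexVolume driver prints on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // init must succeed and advertise capabilities, e.g. no attach support.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Operations this driver does not implement report "Not supported".
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }

With any executable at that path answering init this way, the dynamic probe succeeds and the repeated driver-call.go/plugins.go errors stop.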
May 15 12:52:43.030478 kubelet[2697]: E0515 12:52:43.027878 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:43.974105 containerd[1555]: time="2025-05-15T12:52:43.974054665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:43.975007 containerd[1555]: time="2025-05-15T12:52:43.974658639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 15 12:52:43.975598 containerd[1555]: time="2025-05-15T12:52:43.975397904Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:43.976720 containerd[1555]: time="2025-05-15T12:52:43.976685993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:43.977465 containerd[1555]: time="2025-05-15T12:52:43.977192876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 3.87566028s" May 15 12:52:43.977465 containerd[1555]: time="2025-05-15T12:52:43.977222966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 12:52:43.979062 containerd[1555]: time="2025-05-15T12:52:43.979037779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 12:52:43.980218 containerd[1555]: time="2025-05-15T12:52:43.980189307Z" level=info msg="CreateContainer within sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 12:52:43.987928 containerd[1555]: time="2025-05-15T12:52:43.987849529Z" level=info msg="Container e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:44.013970 containerd[1555]: time="2025-05-15T12:52:44.013915951Z" level=info msg="CreateContainer within sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\"" May 15 12:52:44.016298 containerd[1555]: time="2025-05-15T12:52:44.016263136Z" level=info msg="StartContainer for \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\"" May 15 12:52:44.017817 containerd[1555]: time="2025-05-15T12:52:44.017778346Z" level=info msg="connecting to shim e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7" address="unix:///run/containerd/s/1390c37b97e46f840f59fc912c9a1e991a1fef46e37e21eacbc9ed08e0c51bde" protocol=ttrpc 
version=3 May 15 12:52:44.045711 systemd[1]: Started cri-containerd-e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7.scope - libcontainer container e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7. May 15 12:52:44.215532 containerd[1555]: time="2025-05-15T12:52:44.215453981Z" level=info msg="StartContainer for \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" returns successfully" May 15 12:52:44.254072 systemd[1]: cri-containerd-e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7.scope: Deactivated successfully. May 15 12:52:44.259240 containerd[1555]: time="2025-05-15T12:52:44.259168860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" id:\"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" pid:3302 exited_at:{seconds:1747313564 nanos:258443216}" May 15 12:52:44.259240 containerd[1555]: time="2025-05-15T12:52:44.259137000Z" level=info msg="received exit event container_id:\"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" id:\"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" pid:3302 exited_at:{seconds:1747313564 nanos:258443216}" May 15 12:52:44.283775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7-rootfs.mount: Deactivated successfully. May 15 12:52:45.025984 kubelet[2697]: E0515 12:52:45.025902 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:45.124865 kubelet[2697]: E0515 12:52:45.124821 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:47.025603 kubelet[2697]: E0515 12:52:47.025222 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:49.026295 kubelet[2697]: E0515 12:52:49.026246 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:50.879437 containerd[1555]: time="2025-05-15T12:52:50.879340909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:50.880621 containerd[1555]: time="2025-05-15T12:52:50.880442763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 12:52:50.881097 containerd[1555]: time="2025-05-15T12:52:50.881061966Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:50.882744 containerd[1555]: 
time="2025-05-15T12:52:50.882704323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:52:50.883321 containerd[1555]: time="2025-05-15T12:52:50.883287246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 6.904217817s" May 15 12:52:50.883395 containerd[1555]: time="2025-05-15T12:52:50.883381296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 12:52:50.884877 containerd[1555]: time="2025-05-15T12:52:50.884832502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 12:52:50.905433 containerd[1555]: time="2025-05-15T12:52:50.904758849Z" level=info msg="CreateContainer within sandbox \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 12:52:50.911826 containerd[1555]: time="2025-05-15T12:52:50.911758149Z" level=info msg="Container 80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5: CDI devices from CRI Config.CDIDevices: []" May 15 12:52:50.916299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270572206.mount: Deactivated successfully. May 15 12:52:50.920943 containerd[1555]: time="2025-05-15T12:52:50.920912449Z" level=info msg="CreateContainer within sandbox \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\"" May 15 12:52:50.921506 containerd[1555]: time="2025-05-15T12:52:50.921420421Z" level=info msg="StartContainer for \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\"" May 15 12:52:50.922480 containerd[1555]: time="2025-05-15T12:52:50.922434106Z" level=info msg="connecting to shim 80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5" address="unix:///run/containerd/s/b0895005c6b962f44eb93a4ce11f0d2b75f7c68f2a12f151760fec1e6f404ad5" protocol=ttrpc version=3 May 15 12:52:50.960828 systemd[1]: Started cri-containerd-80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5.scope - libcontainer container 80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5. 
May 15 12:52:51.026590 kubelet[2697]: E0515 12:52:51.026248 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:51.063748 containerd[1555]: time="2025-05-15T12:52:51.063677162Z" level=info msg="StartContainer for \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" returns successfully" May 15 12:52:51.144736 kubelet[2697]: E0515 12:52:51.144530 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:51.170357 kubelet[2697]: I0515 12:52:51.170052 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79c5f7d894-hxzff" podStartSLOduration=1.428396653 podStartE2EDuration="12.170034796s" podCreationTimestamp="2025-05-15 12:52:39 +0000 UTC" firstStartedPulling="2025-05-15 12:52:40.142870628 +0000 UTC m=+14.204029046" lastFinishedPulling="2025-05-15 12:52:50.884508771 +0000 UTC m=+24.945667189" observedRunningTime="2025-05-15 12:52:51.169797825 +0000 UTC m=+25.230956253" watchObservedRunningTime="2025-05-15 12:52:51.170034796 +0000 UTC m=+25.231193214" May 15 12:52:52.146431 kubelet[2697]: I0515 12:52:52.145722 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:52:52.146431 kubelet[2697]: E0515 12:52:52.146059 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:52:53.026202 kubelet[2697]: E0515 12:52:53.026165 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:55.025758 kubelet[2697]: E0515 12:52:55.025654 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:57.025756 kubelet[2697]: E0515 12:52:57.025674 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:52:59.025456 kubelet[2697]: E0515 12:52:59.025389 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:01.025773 kubelet[2697]: E0515 12:53:01.025699 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:03.025749 kubelet[2697]: E0515 12:53:03.025705 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:04.462133 kubelet[2697]: I0515 12:53:04.462069 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:53:04.465087 kubelet[2697]: E0515 12:53:04.465029 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:05.026134 kubelet[2697]: E0515 12:53:05.026066 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:05.174427 kubelet[2697]: E0515 12:53:05.170653 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:07.026614 kubelet[2697]: E0515 12:53:07.025929 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:08.970847 containerd[1555]: time="2025-05-15T12:53:08.970796153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:08.971797 containerd[1555]: time="2025-05-15T12:53:08.971676424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 12:53:08.972537 containerd[1555]: time="2025-05-15T12:53:08.972509045Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:08.974672 containerd[1555]: time="2025-05-15T12:53:08.974645508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:08.975592 containerd[1555]: time="2025-05-15T12:53:08.975542619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 18.090676236s" May 15 12:53:08.975673 containerd[1555]: time="2025-05-15T12:53:08.975656669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 12:53:08.980174 containerd[1555]: time="2025-05-15T12:53:08.980149325Z" level=info msg="CreateContainer within sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 12:53:08.988599 containerd[1555]: time="2025-05-15T12:53:08.986859764Z" level=info msg="Container 174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:09.008062 containerd[1555]: time="2025-05-15T12:53:09.008019693Z" level=info msg="CreateContainer within sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\"" May 15 12:53:09.008856 containerd[1555]: time="2025-05-15T12:53:09.008669343Z" level=info msg="StartContainer for \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\"" May 15 12:53:09.010414 containerd[1555]: time="2025-05-15T12:53:09.010393396Z" level=info msg="connecting to shim 174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2" address="unix:///run/containerd/s/1390c37b97e46f840f59fc912c9a1e991a1fef46e37e21eacbc9ed08e0c51bde" protocol=ttrpc version=3 May 15 12:53:09.030589 kubelet[2697]: E0515 12:53:09.026122 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:09.064705 systemd[1]: Started cri-containerd-174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2.scope - libcontainer container 174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2. May 15 12:53:09.141873 containerd[1555]: time="2025-05-15T12:53:09.141778963Z" level=info msg="StartContainer for \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" returns successfully" May 15 12:53:09.183493 kubelet[2697]: E0515 12:53:09.183453 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:10.280578 kubelet[2697]: E0515 12:53:10.185909 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:11.034775 kubelet[2697]: E0515 12:53:11.033303 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:11.735888 containerd[1555]: time="2025-05-15T12:53:11.735778677Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 12:53:11.740248 systemd[1]: cri-containerd-174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2.scope: Deactivated successfully. 
May 15 12:53:11.740636 systemd[1]: cri-containerd-174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2.scope: Consumed 2.509s CPU time, 176.1M memory peak, 154M written to disk. May 15 12:53:11.742635 containerd[1555]: time="2025-05-15T12:53:11.741933454Z" level=info msg="received exit event container_id:\"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" id:\"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" pid:3404 exited_at:{seconds:1747313591 nanos:741371834}" May 15 12:53:11.742635 containerd[1555]: time="2025-05-15T12:53:11.742237555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" id:\"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" pid:3404 exited_at:{seconds:1747313591 nanos:741371834}" May 15 12:53:11.776881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2-rootfs.mount: Deactivated successfully. May 15 12:53:11.782361 kubelet[2697]: I0515 12:53:11.782107 2697 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 12:53:11.844355 systemd[1]: Created slice kubepods-burstable-pod47aeb2aa_cfcd_4701_8f9f_c898edfab234.slice - libcontainer container kubepods-burstable-pod47aeb2aa_cfcd_4701_8f9f_c898edfab234.slice. May 15 12:53:11.853491 kubelet[2697]: W0515 12:53:11.853431 2697 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-236-126-108" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node '172-236-126-108' and this object May 15 12:53:11.853825 kubelet[2697]: E0515 12:53:11.853471 2697 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-236-126-108\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '172-236-126-108' and this object" logger="UnhandledError" May 15 12:53:11.853967 kubelet[2697]: W0515 12:53:11.853942 2697 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:172-236-126-108" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node '172-236-126-108' and this object May 15 12:53:11.855446 kubelet[2697]: E0515 12:53:11.855391 2697 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:172-236-126-108\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '172-236-126-108' and this object" logger="UnhandledError" May 15 12:53:11.862032 systemd[1]: Created slice kubepods-besteffort-poda3c86fa8_07d6_4bd0_ba95_5246fc2365f5.slice - libcontainer container kubepods-besteffort-poda3c86fa8_07d6_4bd0_ba95_5246fc2365f5.slice. 
May 15 12:53:11.872185 systemd[1]: Created slice kubepods-besteffort-pod72f594de_0445_4674_8b32_ccb3305262a8.slice - libcontainer container kubepods-besteffort-pod72f594de_0445_4674_8b32_ccb3305262a8.slice. May 15 12:53:11.880866 systemd[1]: Created slice kubepods-besteffort-pod76049a04_26ee_4fa9_afd5_5ad317529d27.slice - libcontainer container kubepods-besteffort-pod76049a04_26ee_4fa9_afd5_5ad317529d27.slice. May 15 12:53:11.889619 systemd[1]: Created slice kubepods-besteffort-pod9698ee50_755f_43e4_a451_771820b74a00.slice - libcontainer container kubepods-besteffort-pod9698ee50_755f_43e4_a451_771820b74a00.slice. May 15 12:53:11.900642 systemd[1]: Created slice kubepods-burstable-podfac0c8a1_1f04_47ec_86ff_e7aca09e7dbe.slice - libcontainer container kubepods-burstable-podfac0c8a1_1f04_47ec_86ff_e7aca09e7dbe.slice. May 15 12:53:11.965357 kubelet[2697]: I0515 12:53:11.965312 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9mzp\" (UniqueName: \"kubernetes.io/projected/47aeb2aa-cfcd-4701-8f9f-c898edfab234-kube-api-access-z9mzp\") pod \"coredns-668d6bf9bc-jq7vf\" (UID: \"47aeb2aa-cfcd-4701-8f9f-c898edfab234\") " pod="kube-system/coredns-668d6bf9bc-jq7vf" May 15 12:53:11.965529 kubelet[2697]: I0515 12:53:11.965405 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdm7\" (UniqueName: \"kubernetes.io/projected/9698ee50-755f-43e4-a451-771820b74a00-kube-api-access-rtdm7\") pod \"calico-apiserver-5d86d7c9bb-64dfc\" (UID: \"9698ee50-755f-43e4-a451-771820b74a00\") " pod="calico-apiserver/calico-apiserver-5d86d7c9bb-64dfc" May 15 12:53:11.965529 kubelet[2697]: I0515 12:53:11.965430 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgzc\" (UniqueName: \"kubernetes.io/projected/fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe-kube-api-access-lbgzc\") pod \"coredns-668d6bf9bc-grpr6\" (UID: \"fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe\") " pod="kube-system/coredns-668d6bf9bc-grpr6" May 15 12:53:11.965529 kubelet[2697]: I0515 12:53:11.965505 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47aeb2aa-cfcd-4701-8f9f-c898edfab234-config-volume\") pod \"coredns-668d6bf9bc-jq7vf\" (UID: \"47aeb2aa-cfcd-4701-8f9f-c898edfab234\") " pod="kube-system/coredns-668d6bf9bc-jq7vf" May 15 12:53:11.965529 kubelet[2697]: I0515 12:53:11.965526 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72f594de-0445-4674-8b32-ccb3305262a8-calico-apiserver-certs\") pod \"calico-apiserver-5d86d7c9bb-95bdc\" (UID: \"72f594de-0445-4674-8b32-ccb3305262a8\") " pod="calico-apiserver/calico-apiserver-5d86d7c9bb-95bdc" May 15 12:53:11.965736 kubelet[2697]: I0515 12:53:11.965617 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a3c86fa8-07d6-4bd0-ba95-5246fc2365f5-calico-apiserver-certs\") pod \"calico-apiserver-86b45b489c-mh8vn\" (UID: \"a3c86fa8-07d6-4bd0-ba95-5246fc2365f5\") " pod="calico-apiserver/calico-apiserver-86b45b489c-mh8vn" May 15 12:53:11.965736 kubelet[2697]: I0515 12:53:11.965673 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe-config-volume\") pod \"coredns-668d6bf9bc-grpr6\" (UID: \"fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe\") " pod="kube-system/coredns-668d6bf9bc-grpr6" May 15 12:53:11.965736 kubelet[2697]: I0515 12:53:11.965691 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscg4\" (UniqueName: \"kubernetes.io/projected/a3c86fa8-07d6-4bd0-ba95-5246fc2365f5-kube-api-access-cscg4\") pod \"calico-apiserver-86b45b489c-mh8vn\" (UID: \"a3c86fa8-07d6-4bd0-ba95-5246fc2365f5\") " pod="calico-apiserver/calico-apiserver-86b45b489c-mh8vn" May 15 12:53:11.965849 kubelet[2697]: I0515 12:53:11.965715 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76049a04-26ee-4fa9-afd5-5ad317529d27-tigera-ca-bundle\") pod \"calico-kube-controllers-699d85858d-pssr6\" (UID: \"76049a04-26ee-4fa9-afd5-5ad317529d27\") " pod="calico-system/calico-kube-controllers-699d85858d-pssr6" May 15 12:53:11.965849 kubelet[2697]: I0515 12:53:11.965764 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9698ee50-755f-43e4-a451-771820b74a00-calico-apiserver-certs\") pod \"calico-apiserver-5d86d7c9bb-64dfc\" (UID: \"9698ee50-755f-43e4-a451-771820b74a00\") " pod="calico-apiserver/calico-apiserver-5d86d7c9bb-64dfc" May 15 12:53:11.965849 kubelet[2697]: I0515 12:53:11.965783 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngsng\" (UniqueName: \"kubernetes.io/projected/72f594de-0445-4674-8b32-ccb3305262a8-kube-api-access-ngsng\") pod \"calico-apiserver-5d86d7c9bb-95bdc\" (UID: \"72f594de-0445-4674-8b32-ccb3305262a8\") " pod="calico-apiserver/calico-apiserver-5d86d7c9bb-95bdc" May 15 12:53:11.966009 kubelet[2697]: I0515 12:53:11.965966 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wkq\" (UniqueName: \"kubernetes.io/projected/76049a04-26ee-4fa9-afd5-5ad317529d27-kube-api-access-p4wkq\") pod \"calico-kube-controllers-699d85858d-pssr6\" (UID: \"76049a04-26ee-4fa9-afd5-5ad317529d27\") " pod="calico-system/calico-kube-controllers-699d85858d-pssr6" May 15 12:53:12.153106 kubelet[2697]: E0515 12:53:12.153037 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:12.153873 containerd[1555]: time="2025-05-15T12:53:12.153815275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq7vf,Uid:47aeb2aa-cfcd-4701-8f9f-c898edfab234,Namespace:kube-system,Attempt:0,}" May 15 12:53:12.189835 containerd[1555]: time="2025-05-15T12:53:12.189375772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d85858d-pssr6,Uid:76049a04-26ee-4fa9-afd5-5ad317529d27,Namespace:calico-system,Attempt:0,}" May 15 12:53:12.201607 kubelet[2697]: E0515 12:53:12.201175 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:12.204738 kubelet[2697]: E0515 12:53:12.204542 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:12.204935 containerd[1555]: time="2025-05-15T12:53:12.204886189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 12:53:12.206004 containerd[1555]: time="2025-05-15T12:53:12.205949580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grpr6,Uid:fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe,Namespace:kube-system,Attempt:0,}" May 15 12:53:12.320051 containerd[1555]: time="2025-05-15T12:53:12.319956538Z" level=error msg="Failed to destroy network for sandbox \"32123a36283702e83769ea6966b4497f72abd5853083ba24d736436a7fa3b35b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.321752 containerd[1555]: time="2025-05-15T12:53:12.321672660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d85858d-pssr6,Uid:76049a04-26ee-4fa9-afd5-5ad317529d27,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32123a36283702e83769ea6966b4497f72abd5853083ba24d736436a7fa3b35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.322193 kubelet[2697]: E0515 12:53:12.322132 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32123a36283702e83769ea6966b4497f72abd5853083ba24d736436a7fa3b35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.322614 kubelet[2697]: E0515 12:53:12.322317 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32123a36283702e83769ea6966b4497f72abd5853083ba24d736436a7fa3b35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699d85858d-pssr6" May 15 12:53:12.322614 kubelet[2697]: E0515 12:53:12.322362 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32123a36283702e83769ea6966b4497f72abd5853083ba24d736436a7fa3b35b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699d85858d-pssr6" May 15 12:53:12.322614 kubelet[2697]: E0515 12:53:12.322440 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-699d85858d-pssr6_calico-system(76049a04-26ee-4fa9-afd5-5ad317529d27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-699d85858d-pssr6_calico-system(76049a04-26ee-4fa9-afd5-5ad317529d27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32123a36283702e83769ea6966b4497f72abd5853083ba24d736436a7fa3b35b\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699d85858d-pssr6" podUID="76049a04-26ee-4fa9-afd5-5ad317529d27" May 15 12:53:12.344174 containerd[1555]: time="2025-05-15T12:53:12.344093574Z" level=error msg="Failed to destroy network for sandbox \"f665ae7d394a37a3849970597091b916603db23a7c3f9912426e5387d353d37c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.345285 containerd[1555]: time="2025-05-15T12:53:12.345239796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq7vf,Uid:47aeb2aa-cfcd-4701-8f9f-c898edfab234,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f665ae7d394a37a3849970597091b916603db23a7c3f9912426e5387d353d37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.346600 kubelet[2697]: E0515 12:53:12.345463 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f665ae7d394a37a3849970597091b916603db23a7c3f9912426e5387d353d37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.346600 kubelet[2697]: E0515 12:53:12.345526 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f665ae7d394a37a3849970597091b916603db23a7c3f9912426e5387d353d37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jq7vf" May 15 12:53:12.346600 kubelet[2697]: E0515 12:53:12.345572 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f665ae7d394a37a3849970597091b916603db23a7c3f9912426e5387d353d37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jq7vf" May 15 12:53:12.346713 kubelet[2697]: E0515 12:53:12.345614 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jq7vf_kube-system(47aeb2aa-cfcd-4701-8f9f-c898edfab234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jq7vf_kube-system(47aeb2aa-cfcd-4701-8f9f-c898edfab234)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f665ae7d394a37a3849970597091b916603db23a7c3f9912426e5387d353d37c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jq7vf" podUID="47aeb2aa-cfcd-4701-8f9f-c898edfab234" May 15 12:53:12.353397 containerd[1555]: time="2025-05-15T12:53:12.353363794Z" level=error msg="Failed to destroy network 
for sandbox \"f01598683cec2f15dd7a6371b37c3d1854d8d22ea50753d0e365c0c3b7e16201\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.354243 containerd[1555]: time="2025-05-15T12:53:12.354188375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grpr6,Uid:fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01598683cec2f15dd7a6371b37c3d1854d8d22ea50753d0e365c0c3b7e16201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.354686 kubelet[2697]: E0515 12:53:12.354366 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01598683cec2f15dd7a6371b37c3d1854d8d22ea50753d0e365c0c3b7e16201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:12.354686 kubelet[2697]: E0515 12:53:12.354407 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01598683cec2f15dd7a6371b37c3d1854d8d22ea50753d0e365c0c3b7e16201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grpr6" May 15 12:53:12.354686 kubelet[2697]: E0515 12:53:12.354425 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01598683cec2f15dd7a6371b37c3d1854d8d22ea50753d0e365c0c3b7e16201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grpr6" May 15 12:53:12.354781 kubelet[2697]: E0515 12:53:12.354468 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-grpr6_kube-system(fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-grpr6_kube-system(fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f01598683cec2f15dd7a6371b37c3d1854d8d22ea50753d0e365c0c3b7e16201\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-grpr6" podUID="fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe" May 15 12:53:12.775287 systemd[1]: run-netns-cni\x2db4d2b6a1\x2dae17\x2dcb1f\x2dc25c\x2d2759b023fbb8.mount: Deactivated successfully. May 15 12:53:12.775390 systemd[1]: run-netns-cni\x2ded8f4fc9\x2dc521\x2d77f6\x2d685a\x2d46ca40ec01a9.mount: Deactivated successfully. May 15 12:53:13.031302 systemd[1]: Created slice kubepods-besteffort-podc0d8dc71_c387_4c70_bebd_31f74a7e6218.slice - libcontainer container kubepods-besteffort-podc0d8dc71_c387_4c70_bebd_31f74a7e6218.slice. 
May 15 12:53:13.034219 containerd[1555]: time="2025-05-15T12:53:13.034179958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nq42m,Uid:c0d8dc71-c387-4c70-bebd-31f74a7e6218,Namespace:calico-system,Attempt:0,}" May 15 12:53:13.070903 kubelet[2697]: E0515 12:53:13.069852 2697 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition May 15 12:53:13.070903 kubelet[2697]: E0515 12:53:13.069961 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9698ee50-755f-43e4-a451-771820b74a00-calico-apiserver-certs podName:9698ee50-755f-43e4-a451-771820b74a00 nodeName:}" failed. No retries permitted until 2025-05-15 12:53:13.569940773 +0000 UTC m=+47.631099191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9698ee50-755f-43e4-a451-771820b74a00-calico-apiserver-certs") pod "calico-apiserver-5d86d7c9bb-64dfc" (UID: "9698ee50-755f-43e4-a451-771820b74a00") : failed to sync secret cache: timed out waiting for the condition May 15 12:53:13.071645 kubelet[2697]: E0515 12:53:13.071306 2697 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition May 15 12:53:13.071645 kubelet[2697]: E0515 12:53:13.071359 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72f594de-0445-4674-8b32-ccb3305262a8-calico-apiserver-certs podName:72f594de-0445-4674-8b32-ccb3305262a8 nodeName:}" failed. No retries permitted until 2025-05-15 12:53:13.571343084 +0000 UTC m=+47.632501512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/72f594de-0445-4674-8b32-ccb3305262a8-calico-apiserver-certs") pod "calico-apiserver-5d86d7c9bb-95bdc" (UID: "72f594de-0445-4674-8b32-ccb3305262a8") : failed to sync secret cache: timed out waiting for the condition May 15 12:53:13.071645 kubelet[2697]: E0515 12:53:13.071491 2697 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition May 15 12:53:13.072042 kubelet[2697]: E0515 12:53:13.071794 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3c86fa8-07d6-4bd0-ba95-5246fc2365f5-calico-apiserver-certs podName:a3c86fa8-07d6-4bd0-ba95-5246fc2365f5 nodeName:}" failed. No retries permitted until 2025-05-15 12:53:13.571523315 +0000 UTC m=+47.632681733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a3c86fa8-07d6-4bd0-ba95-5246fc2365f5-calico-apiserver-certs") pod "calico-apiserver-86b45b489c-mh8vn" (UID: "a3c86fa8-07d6-4bd0-ba95-5246fc2365f5") : failed to sync secret cache: timed out waiting for the condition May 15 12:53:13.101416 containerd[1555]: time="2025-05-15T12:53:13.101340844Z" level=error msg="Failed to destroy network for sandbox \"6fc606bdb3ce624d73d24ba7bfabeb0cace0dde067c3be0411fbfef4ce7acbc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.104052 systemd[1]: run-netns-cni\x2db8b55f88\x2d8b52\x2d72d8\x2ddd57\x2dcd2d718e8559.mount: Deactivated successfully. 
May 15 12:53:13.105081 containerd[1555]: time="2025-05-15T12:53:13.105040488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nq42m,Uid:c0d8dc71-c387-4c70-bebd-31f74a7e6218,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fc606bdb3ce624d73d24ba7bfabeb0cace0dde067c3be0411fbfef4ce7acbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.105570 kubelet[2697]: E0515 12:53:13.105308 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fc606bdb3ce624d73d24ba7bfabeb0cace0dde067c3be0411fbfef4ce7acbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.105570 kubelet[2697]: E0515 12:53:13.105405 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fc606bdb3ce624d73d24ba7bfabeb0cace0dde067c3be0411fbfef4ce7acbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nq42m" May 15 12:53:13.105570 kubelet[2697]: E0515 12:53:13.105440 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fc606bdb3ce624d73d24ba7bfabeb0cace0dde067c3be0411fbfef4ce7acbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nq42m" May 15 12:53:13.105705 kubelet[2697]: E0515 12:53:13.105501 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nq42m_calico-system(c0d8dc71-c387-4c70-bebd-31f74a7e6218)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nq42m_calico-system(c0d8dc71-c387-4c70-bebd-31f74a7e6218)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fc606bdb3ce624d73d24ba7bfabeb0cace0dde067c3be0411fbfef4ce7acbc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nq42m" podUID="c0d8dc71-c387-4c70-bebd-31f74a7e6218" May 15 12:53:13.670228 containerd[1555]: time="2025-05-15T12:53:13.670176134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b45b489c-mh8vn,Uid:a3c86fa8-07d6-4bd0-ba95-5246fc2365f5,Namespace:calico-apiserver,Attempt:0,}" May 15 12:53:13.681106 containerd[1555]: time="2025-05-15T12:53:13.681074185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-95bdc,Uid:72f594de-0445-4674-8b32-ccb3305262a8,Namespace:calico-apiserver,Attempt:0,}" May 15 12:53:13.700590 containerd[1555]: time="2025-05-15T12:53:13.700413904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-64dfc,Uid:9698ee50-755f-43e4-a451-771820b74a00,Namespace:calico-apiserver,Attempt:0,}" May 
15 12:53:13.787389 containerd[1555]: time="2025-05-15T12:53:13.787310279Z" level=error msg="Failed to destroy network for sandbox \"ce7b8935b6554ef839d5bf083e5fb328c81fda98801a8c0c9f13f7134c7b387c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.789694 containerd[1555]: time="2025-05-15T12:53:13.789654862Z" level=error msg="Failed to destroy network for sandbox \"5e1534ba90369619f8887eba3acd18e97c0ffd605c1f24b9f7431e3eeafe3218\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.793047 containerd[1555]: time="2025-05-15T12:53:13.792961025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-95bdc,Uid:72f594de-0445-4674-8b32-ccb3305262a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b8935b6554ef839d5bf083e5fb328c81fda98801a8c0c9f13f7134c7b387c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.794702 kubelet[2697]: E0515 12:53:13.794192 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b8935b6554ef839d5bf083e5fb328c81fda98801a8c0c9f13f7134c7b387c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.794702 kubelet[2697]: E0515 12:53:13.794294 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b8935b6554ef839d5bf083e5fb328c81fda98801a8c0c9f13f7134c7b387c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-95bdc" May 15 12:53:13.794702 kubelet[2697]: E0515 12:53:13.794316 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b8935b6554ef839d5bf083e5fb328c81fda98801a8c0c9f13f7134c7b387c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-95bdc" May 15 12:53:13.794285 systemd[1]: run-netns-cni\x2df50b2e5d\x2d56ee\x2d1547\x2d3574\x2d822154114552.mount: Deactivated successfully. 
May 15 12:53:13.795101 kubelet[2697]: E0515 12:53:13.794365 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d86d7c9bb-95bdc_calico-apiserver(72f594de-0445-4674-8b32-ccb3305262a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d86d7c9bb-95bdc_calico-apiserver(72f594de-0445-4674-8b32-ccb3305262a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce7b8935b6554ef839d5bf083e5fb328c81fda98801a8c0c9f13f7134c7b387c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-95bdc" podUID="72f594de-0445-4674-8b32-ccb3305262a8" May 15 12:53:13.794399 systemd[1]: run-netns-cni\x2d1eb3c011\x2d4d7f\x2d8ece\x2d7d19\x2d607149902db5.mount: Deactivated successfully. May 15 12:53:13.796710 containerd[1555]: time="2025-05-15T12:53:13.796662119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b45b489c-mh8vn,Uid:a3c86fa8-07d6-4bd0-ba95-5246fc2365f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1534ba90369619f8887eba3acd18e97c0ffd605c1f24b9f7431e3eeafe3218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.796949 kubelet[2697]: E0515 12:53:13.796929 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1534ba90369619f8887eba3acd18e97c0ffd605c1f24b9f7431e3eeafe3218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.797071 kubelet[2697]: E0515 12:53:13.797026 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1534ba90369619f8887eba3acd18e97c0ffd605c1f24b9f7431e3eeafe3218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86b45b489c-mh8vn" May 15 12:53:13.797071 kubelet[2697]: E0515 12:53:13.797049 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1534ba90369619f8887eba3acd18e97c0ffd605c1f24b9f7431e3eeafe3218\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86b45b489c-mh8vn" May 15 12:53:13.797231 kubelet[2697]: E0515 12:53:13.797172 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86b45b489c-mh8vn_calico-apiserver(a3c86fa8-07d6-4bd0-ba95-5246fc2365f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86b45b489c-mh8vn_calico-apiserver(a3c86fa8-07d6-4bd0-ba95-5246fc2365f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5e1534ba90369619f8887eba3acd18e97c0ffd605c1f24b9f7431e3eeafe3218\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86b45b489c-mh8vn" podUID="a3c86fa8-07d6-4bd0-ba95-5246fc2365f5" May 15 12:53:13.814362 containerd[1555]: time="2025-05-15T12:53:13.814228096Z" level=error msg="Failed to destroy network for sandbox \"db782861df25dcdf64ecfb64fee0931339b58c4688a8da28be656ae2f7e4196c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.818308 containerd[1555]: time="2025-05-15T12:53:13.817517069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-64dfc,Uid:9698ee50-755f-43e4-a451-771820b74a00,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db782861df25dcdf64ecfb64fee0931339b58c4688a8da28be656ae2f7e4196c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.817777 systemd[1]: run-netns-cni\x2d8a362d59\x2dcebc\x2d355b\x2d4334\x2d5ae17bd8b9e3.mount: Deactivated successfully. May 15 12:53:13.819794 kubelet[2697]: E0515 12:53:13.819713 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db782861df25dcdf64ecfb64fee0931339b58c4688a8da28be656ae2f7e4196c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 12:53:13.819915 kubelet[2697]: E0515 12:53:13.819842 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db782861df25dcdf64ecfb64fee0931339b58c4688a8da28be656ae2f7e4196c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-64dfc" May 15 12:53:13.819915 kubelet[2697]: E0515 12:53:13.819865 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db782861df25dcdf64ecfb64fee0931339b58c4688a8da28be656ae2f7e4196c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-64dfc" May 15 12:53:13.820289 kubelet[2697]: E0515 12:53:13.819933 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d86d7c9bb-64dfc_calico-apiserver(9698ee50-755f-43e4-a451-771820b74a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d86d7c9bb-64dfc_calico-apiserver(9698ee50-755f-43e4-a451-771820b74a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db782861df25dcdf64ecfb64fee0931339b58c4688a8da28be656ae2f7e4196c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-64dfc" podUID="9698ee50-755f-43e4-a451-771820b74a00" May 15 12:53:21.404061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068698914.mount: Deactivated successfully. May 15 12:53:21.437439 containerd[1555]: time="2025-05-15T12:53:21.437347218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:21.438390 containerd[1555]: time="2025-05-15T12:53:21.438334629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 12:53:21.439344 containerd[1555]: time="2025-05-15T12:53:21.439291369Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:21.440721 containerd[1555]: time="2025-05-15T12:53:21.440680320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:21.441172 containerd[1555]: time="2025-05-15T12:53:21.441129800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 9.236083821s" May 15 12:53:21.441214 containerd[1555]: time="2025-05-15T12:53:21.441175940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 12:53:21.460742 containerd[1555]: time="2025-05-15T12:53:21.460706752Z" level=info msg="CreateContainer within sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 12:53:21.467736 containerd[1555]: time="2025-05-15T12:53:21.467715816Z" level=info msg="Container 2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:21.478338 containerd[1555]: time="2025-05-15T12:53:21.478242522Z" level=info msg="CreateContainer within sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\"" May 15 12:53:21.478783 containerd[1555]: time="2025-05-15T12:53:21.478766472Z" level=info msg="StartContainer for \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\"" May 15 12:53:21.480760 containerd[1555]: time="2025-05-15T12:53:21.480714194Z" level=info msg="connecting to shim 2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499" address="unix:///run/containerd/s/1390c37b97e46f840f59fc912c9a1e991a1fef46e37e21eacbc9ed08e0c51bde" protocol=ttrpc version=3 May 15 12:53:21.510699 systemd[1]: Started cri-containerd-2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499.scope - libcontainer container 2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499. 
May 15 12:53:21.577827 containerd[1555]: time="2025-05-15T12:53:21.577798291Z" level=info msg="StartContainer for \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" returns successfully" May 15 12:53:21.661381 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 12:53:21.661524 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 12:53:22.228911 kubelet[2697]: E0515 12:53:22.228874 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:22.265589 kubelet[2697]: I0515 12:53:22.265097 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mvhbh" podStartSLOduration=1.9232641639999999 podStartE2EDuration="43.265076685s" podCreationTimestamp="2025-05-15 12:52:39 +0000 UTC" firstStartedPulling="2025-05-15 12:52:40.10082638 +0000 UTC m=+14.161984798" lastFinishedPulling="2025-05-15 12:53:21.442638901 +0000 UTC m=+55.503797319" observedRunningTime="2025-05-15 12:53:22.254746739 +0000 UTC m=+56.315905157" watchObservedRunningTime="2025-05-15 12:53:22.265076685 +0000 UTC m=+56.326235103" May 15 12:53:22.320032 containerd[1555]: time="2025-05-15T12:53:22.319977915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" id:\"2986db8930adbc41975eae2cc8376fe6cd5f0a85b2806e16f0f245af55c98e46\" pid:3735 exit_status:1 exited_at:{seconds:1747313602 nanos:319602525}" May 15 12:53:23.234356 kubelet[2697]: E0515 12:53:23.234309 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:23.387610 containerd[1555]: time="2025-05-15T12:53:23.386888569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" id:\"462d0004aec0c3e0858be61f6c2e1f4ddf90d36be12ab4a69e5db9ac0c69f063\" pid:3872 exit_status:1 exited_at:{seconds:1747313603 nanos:385886319}" May 15 12:53:23.599308 systemd-networkd[1458]: vxlan.calico: Link UP May 15 12:53:23.599316 systemd-networkd[1458]: vxlan.calico: Gained carrier May 15 12:53:24.028153 containerd[1555]: time="2025-05-15T12:53:24.027799770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-64dfc,Uid:9698ee50-755f-43e4-a451-771820b74a00,Namespace:calico-apiserver,Attempt:0,}" May 15 12:53:24.197361 systemd-networkd[1458]: cali078dfec57ab: Link UP May 15 12:53:24.197658 systemd-networkd[1458]: cali078dfec57ab: Gained carrier May 15 12:53:24.220920 containerd[1555]: 2025-05-15 12:53:24.103 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0 calico-apiserver-5d86d7c9bb- calico-apiserver 9698ee50-755f-43e4-a451-771820b74a00 772 0 2025-05-15 12:52:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d86d7c9bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-126-108 calico-apiserver-5d86d7c9bb-64dfc eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali078dfec57ab [] []}} ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-" May 15 12:53:24.220920 containerd[1555]: 2025-05-15 12:53:24.103 [INFO][3960] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:24.220920 containerd[1555]: 2025-05-15 12:53:24.144 [INFO][3972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.155 [INFO][3972] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310f20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-126-108", "pod":"calico-apiserver-5d86d7c9bb-64dfc", "timestamp":"2025-05-15 12:53:24.144417726 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.155 [INFO][3972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.155 [INFO][3972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.155 [INFO][3972] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.157 [INFO][3972] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" host="172-236-126-108" May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.166 [INFO][3972] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.170 [INFO][3972] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.172 [INFO][3972] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:24.221130 containerd[1555]: 2025-05-15 12:53:24.174 [INFO][3972] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:24.221380 containerd[1555]: 2025-05-15 12:53:24.174 [INFO][3972] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" host="172-236-126-108" May 15 12:53:24.221380 containerd[1555]: 2025-05-15 12:53:24.176 [INFO][3972] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 May 15 12:53:24.221380 containerd[1555]: 2025-05-15 12:53:24.181 [INFO][3972] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" host="172-236-126-108" May 15 12:53:24.221380 containerd[1555]: 2025-05-15 12:53:24.186 [INFO][3972] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.129/26] block=192.168.62.128/26 handle="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" host="172-236-126-108" May 15 12:53:24.221380 containerd[1555]: 2025-05-15 12:53:24.186 [INFO][3972] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.129/26] handle="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" host="172-236-126-108" May 15 12:53:24.221380 containerd[1555]: 2025-05-15 12:53:24.187 [INFO][3972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:53:24.221380 containerd[1555]: 2025-05-15 12:53:24.187 [INFO][3972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.129/26] IPv6=[] ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:24.221514 containerd[1555]: 2025-05-15 12:53:24.192 [INFO][3960] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0", GenerateName:"calico-apiserver-5d86d7c9bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9698ee50-755f-43e4-a451-771820b74a00", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86d7c9bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"calico-apiserver-5d86d7c9bb-64dfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali078dfec57ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:24.222054 containerd[1555]: 2025-05-15 12:53:24.192 [INFO][3960] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.129/32] ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:24.222054 containerd[1555]: 2025-05-15 12:53:24.192 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali078dfec57ab ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:24.222054 containerd[1555]: 2025-05-15 12:53:24.197 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:24.222148 containerd[1555]: 2025-05-15 12:53:24.198 [INFO][3960] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0", GenerateName:"calico-apiserver-5d86d7c9bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9698ee50-755f-43e4-a451-771820b74a00", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86d7c9bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598", Pod:"calico-apiserver-5d86d7c9bb-64dfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali078dfec57ab", MAC:"5a:16:bb:63:86:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:24.222198 containerd[1555]: 2025-05-15 12:53:24.217 [INFO][3960] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-64dfc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:24.270931 containerd[1555]: time="2025-05-15T12:53:24.270850767Z" level=info msg="connecting to shim d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" address="unix:///run/containerd/s/b3afe7eded49235f0e7c109b92cbc9fe6885e8a6cbfd1ee610878633394d4370" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:24.305711 systemd[1]: Started cri-containerd-d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598.scope - libcontainer container d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598. 
May 15 12:53:24.361253 containerd[1555]: time="2025-05-15T12:53:24.361146771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-64dfc,Uid:9698ee50-755f-43e4-a451-771820b74a00,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\"" May 15 12:53:24.364569 containerd[1555]: time="2025-05-15T12:53:24.364435743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:53:25.025673 kubelet[2697]: E0515 12:53:25.025632 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:25.026530 containerd[1555]: time="2025-05-15T12:53:25.026407862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grpr6,Uid:fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe,Namespace:kube-system,Attempt:0,}" May 15 12:53:25.155207 systemd-networkd[1458]: cali6169eba3856: Link UP May 15 12:53:25.156896 systemd-networkd[1458]: cali6169eba3856: Gained carrier May 15 12:53:25.177683 containerd[1555]: 2025-05-15 12:53:25.074 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0 coredns-668d6bf9bc- kube-system fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe 773 0 2025-05-15 12:52:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-126-108 coredns-668d6bf9bc-grpr6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6169eba3856 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-" May 15 12:53:25.177683 containerd[1555]: 2025-05-15 12:53:25.074 [INFO][4038] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" May 15 12:53:25.177683 containerd[1555]: 2025-05-15 12:53:25.111 [INFO][4049] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" HandleID="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Workload="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.121 [INFO][4049] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" HandleID="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Workload="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042bc10), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-126-108", "pod":"coredns-668d6bf9bc-grpr6", "timestamp":"2025-05-15 12:53:25.111804331 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.121 [INFO][4049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.121 [INFO][4049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.121 [INFO][4049] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.123 [INFO][4049] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" host="172-236-126-108" May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.127 [INFO][4049] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.132 [INFO][4049] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.134 [INFO][4049] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.136 [INFO][4049] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:25.178001 containerd[1555]: 2025-05-15 12:53:25.136 [INFO][4049] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" host="172-236-126-108" May 15 12:53:25.178220 containerd[1555]: 2025-05-15 12:53:25.138 [INFO][4049] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498 May 15 12:53:25.178220 containerd[1555]: 2025-05-15 12:53:25.142 [INFO][4049] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" host="172-236-126-108" May 15 12:53:25.178220 containerd[1555]: 2025-05-15 12:53:25.147 [INFO][4049] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.130/26] block=192.168.62.128/26 handle="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" host="172-236-126-108" May 15 12:53:25.178220 containerd[1555]: 2025-05-15 12:53:25.147 [INFO][4049] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.130/26] handle="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" host="172-236-126-108" May 15 12:53:25.178220 containerd[1555]: 2025-05-15 12:53:25.147 [INFO][4049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:53:25.178220 containerd[1555]: 2025-05-15 12:53:25.147 [INFO][4049] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.130/26] IPv6=[] ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" HandleID="k8s-pod-network.db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Workload="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" May 15 12:53:25.178334 containerd[1555]: 2025-05-15 12:53:25.151 [INFO][4038] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"coredns-668d6bf9bc-grpr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6169eba3856", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:25.178396 containerd[1555]: 2025-05-15 12:53:25.151 [INFO][4038] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.130/32] ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" May 15 12:53:25.178396 containerd[1555]: 2025-05-15 12:53:25.151 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6169eba3856 ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" May 15 12:53:25.178396 containerd[1555]: 2025-05-15 12:53:25.157 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" May 15 
12:53:25.178481 containerd[1555]: 2025-05-15 12:53:25.158 [INFO][4038] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498", Pod:"coredns-668d6bf9bc-grpr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6169eba3856", MAC:"8e:31:f3:72:d5:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:25.178481 containerd[1555]: 2025-05-15 12:53:25.175 [INFO][4038] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" Namespace="kube-system" Pod="coredns-668d6bf9bc-grpr6" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--grpr6-eth0" May 15 12:53:25.206458 containerd[1555]: time="2025-05-15T12:53:25.206354654Z" level=info msg="connecting to shim db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498" address="unix:///run/containerd/s/a36a3c947ad3e43a67088e95320701424c7330a2d470a84d392897eb10aac979" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:25.239766 systemd[1]: Started cri-containerd-db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498.scope - libcontainer container db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498. 
May 15 12:53:25.292472 containerd[1555]: time="2025-05-15T12:53:25.292354183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grpr6,Uid:fac0c8a1-1f04-47ec-86ff-e7aca09e7dbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498\"" May 15 12:53:25.293830 kubelet[2697]: E0515 12:53:25.293779 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:25.299225 containerd[1555]: time="2025-05-15T12:53:25.299180686Z" level=info msg="CreateContainer within sandbox \"db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:53:25.310510 containerd[1555]: time="2025-05-15T12:53:25.310422541Z" level=info msg="Container 3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:25.314144 systemd-networkd[1458]: cali078dfec57ab: Gained IPv6LL May 15 12:53:25.318651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481525273.mount: Deactivated successfully. May 15 12:53:25.322633 containerd[1555]: time="2025-05-15T12:53:25.322569107Z" level=info msg="CreateContainer within sandbox \"db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3\"" May 15 12:53:25.324458 containerd[1555]: time="2025-05-15T12:53:25.324135207Z" level=info msg="StartContainer for \"3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3\"" May 15 12:53:25.325897 containerd[1555]: time="2025-05-15T12:53:25.325827708Z" level=info msg="connecting to shim 3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3" address="unix:///run/containerd/s/a36a3c947ad3e43a67088e95320701424c7330a2d470a84d392897eb10aac979" protocol=ttrpc version=3 May 15 12:53:25.351761 systemd[1]: Started cri-containerd-3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3.scope - libcontainer container 3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3. 
May 15 12:53:25.377711 systemd-networkd[1458]: vxlan.calico: Gained IPv6LL May 15 12:53:25.409079 containerd[1555]: time="2025-05-15T12:53:25.408982136Z" level=info msg="StartContainer for \"3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3\" returns successfully" May 15 12:53:26.245700 kubelet[2697]: E0515 12:53:26.244831 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:26.259084 kubelet[2697]: I0515 12:53:26.258482 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-grpr6" podStartSLOduration=55.258464484 podStartE2EDuration="55.258464484s" podCreationTimestamp="2025-05-15 12:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:53:26.256819663 +0000 UTC m=+60.317978091" watchObservedRunningTime="2025-05-15 12:53:26.258464484 +0000 UTC m=+60.319622902" May 15 12:53:26.593356 systemd-networkd[1458]: cali6169eba3856: Gained IPv6LL May 15 12:53:27.026742 containerd[1555]: time="2025-05-15T12:53:27.026582050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d85858d-pssr6,Uid:76049a04-26ee-4fa9-afd5-5ad317529d27,Namespace:calico-system,Attempt:0,}" May 15 12:53:27.027821 containerd[1555]: time="2025-05-15T12:53:27.027781390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nq42m,Uid:c0d8dc71-c387-4c70-bebd-31f74a7e6218,Namespace:calico-system,Attempt:0,}" May 15 12:53:27.171627 systemd-networkd[1458]: cali064fbecee0e: Link UP May 15 12:53:27.171854 systemd-networkd[1458]: cali064fbecee0e: Gained carrier May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.081 [INFO][4155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-csi--node--driver--nq42m-eth0 csi-node-driver- calico-system c0d8dc71-c387-4c70-bebd-31f74a7e6218 600 0 2025-05-15 12:52:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-126-108 csi-node-driver-nq42m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali064fbecee0e [] []}} ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.081 [INFO][4155] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.118 [INFO][4183] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" HandleID="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Workload="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" May 15 12:53:27.191295 
containerd[1555]: 2025-05-15 12:53:27.131 [INFO][4183] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" HandleID="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Workload="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b730), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-126-108", "pod":"csi-node-driver-nq42m", "timestamp":"2025-05-15 12:53:27.118971417 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.131 [INFO][4183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.131 [INFO][4183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.131 [INFO][4183] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.133 [INFO][4183] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.137 [INFO][4183] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.146 [INFO][4183] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.148 [INFO][4183] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.150 [INFO][4183] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.150 [INFO][4183] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.151 [INFO][4183] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.154 [INFO][4183] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.160 [INFO][4183] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.131/26] block=192.168.62.128/26 handle="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.160 [INFO][4183] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.131/26] handle="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" host="172-236-126-108" May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.160 [INFO][4183] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. May 15 12:53:27.191295 containerd[1555]: 2025-05-15 12:53:27.160 [INFO][4183] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.131/26] IPv6=[] ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" HandleID="k8s-pod-network.0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Workload="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" May 15 12:53:27.192381 containerd[1555]: 2025-05-15 12:53:27.164 [INFO][4155] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-csi--node--driver--nq42m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0d8dc71-c387-4c70-bebd-31f74a7e6218", ResourceVersion:"600", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"csi-node-driver-nq42m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali064fbecee0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:27.192381 containerd[1555]: 2025-05-15 12:53:27.165 [INFO][4155] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.131/32] ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" May 15 12:53:27.192381 containerd[1555]: 2025-05-15 12:53:27.165 [INFO][4155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali064fbecee0e ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" May 15 12:53:27.192381 containerd[1555]: 2025-05-15 12:53:27.172 [INFO][4155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" May 15 12:53:27.192381 containerd[1555]: 2025-05-15 12:53:27.172 [INFO][4155] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" 
Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-csi--node--driver--nq42m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0d8dc71-c387-4c70-bebd-31f74a7e6218", ResourceVersion:"600", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d", Pod:"csi-node-driver-nq42m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali064fbecee0e", MAC:"36:05:d6:01:74:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:27.192381 containerd[1555]: 2025-05-15 12:53:27.188 [INFO][4155] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" Namespace="calico-system" Pod="csi-node-driver-nq42m" WorkloadEndpoint="172--236--126--108-k8s-csi--node--driver--nq42m-eth0" May 15 12:53:27.221592 containerd[1555]: time="2025-05-15T12:53:27.221395787Z" level=info msg="connecting to shim 0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d" address="unix:///run/containerd/s/4080a192d1ac6026d9f2c3ce752d67b0e9498b55f2e474ddcb835928b6e83335" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:27.250950 kubelet[2697]: E0515 12:53:27.250880 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:27.263673 systemd[1]: Started cri-containerd-0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d.scope - libcontainer container 0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d. 
May 15 12:53:27.298940 systemd-networkd[1458]: calic0bbf0a6347: Link UP May 15 12:53:27.314031 systemd-networkd[1458]: calic0bbf0a6347: Gained carrier May 15 12:53:27.328336 containerd[1555]: time="2025-05-15T12:53:27.327490210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nq42m,Uid:c0d8dc71-c387-4c70-bebd-31f74a7e6218,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d\"" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.078 [INFO][4156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0 calico-kube-controllers-699d85858d- calico-system 76049a04-26ee-4fa9-afd5-5ad317529d27 771 0 2025-05-15 12:52:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:699d85858d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-126-108 calico-kube-controllers-699d85858d-pssr6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic0bbf0a6347 [] []}} ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.079 [INFO][4156] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.119 [INFO][4181] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.132 [INFO][4181] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003192f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-126-108", "pod":"calico-kube-controllers-699d85858d-pssr6", "timestamp":"2025-05-15 12:53:27.119856897 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.132 [INFO][4181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.161 [INFO][4181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.161 [INFO][4181] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.236 [INFO][4181] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.247 [INFO][4181] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.261 [INFO][4181] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.264 [INFO][4181] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.269 [INFO][4181] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.269 [INFO][4181] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.271 [INFO][4181] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.277 [INFO][4181] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.284 [INFO][4181] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.132/26] block=192.168.62.128/26 handle="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.284 [INFO][4181] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.132/26] handle="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" host="172-236-126-108" May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.284 [INFO][4181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:53:27.339389 containerd[1555]: 2025-05-15 12:53:27.284 [INFO][4181] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.132/26] IPv6=[] ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:27.340323 containerd[1555]: 2025-05-15 12:53:27.289 [INFO][4156] cni-plugin/k8s.go 386: Populated endpoint ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0", GenerateName:"calico-kube-controllers-699d85858d-", Namespace:"calico-system", SelfLink:"", UID:"76049a04-26ee-4fa9-afd5-5ad317529d27", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699d85858d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"calico-kube-controllers-699d85858d-pssr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0bbf0a6347", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:27.340323 containerd[1555]: 2025-05-15 12:53:27.289 [INFO][4156] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.132/32] ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:27.340323 containerd[1555]: 2025-05-15 12:53:27.289 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0bbf0a6347 ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:27.340323 containerd[1555]: 2025-05-15 12:53:27.315 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:27.340323 containerd[1555]: 2025-05-15 12:53:27.316 [INFO][4156] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0", GenerateName:"calico-kube-controllers-699d85858d-", Namespace:"calico-system", SelfLink:"", UID:"76049a04-26ee-4fa9-afd5-5ad317529d27", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699d85858d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303", Pod:"calico-kube-controllers-699d85858d-pssr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0bbf0a6347", MAC:"22:4d:16:39:07:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:27.340323 containerd[1555]: 2025-05-15 12:53:27.333 [INFO][4156] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Namespace="calico-system" Pod="calico-kube-controllers-699d85858d-pssr6" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:27.368332 containerd[1555]: time="2025-05-15T12:53:27.368288056Z" level=info msg="connecting to shim 19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" address="unix:///run/containerd/s/2b34978c0145459aa47dabef8de74dbef959fb7a717376ccb7155704e19bda5d" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:27.400701 systemd[1]: Started cri-containerd-19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303.scope - libcontainer container 19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303. 
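Note how the two concurrent CNI ADDs serialize: [4183] acquires the host-wide IPAM lock at 27.131 and releases it at 27.160, while [4181], which asked at 27.132, only acquires it at 27.161. One lock per host keeps block updates race-free at the cost of queueing parallel pod setups. A minimal model of that serialization, using the pod names from the log but with everything else invented for illustration:

    package main

    import (
    	"fmt"
    	"sync"
    )

    var (
    	hostIPAMLock sync.Mutex
    	next         = 131 // next free host ID in 192.168.62.128/26 at this point
    )

    // assignOne serializes address claims the way the host-wide lock does:
    // only one CNI ADD can touch the block at a time.
    func assignOne(pod string, wg *sync.WaitGroup) {
    	defer wg.Done()
    	hostIPAMLock.Lock() // "About to acquire host-wide IPAM lock."
    	defer hostIPAMLock.Unlock()
    	fmt.Printf("%s -> 192.168.62.%d\n", pod, next)
    	next++
    }

    func main() {
    	var wg sync.WaitGroup
    	// Which pod wins depends on goroutine scheduling; the lock only
    	// guarantees mutual exclusion, exactly as in the interleaved log.
    	for _, pod := range []string{"csi-node-driver-nq42m", "calico-kube-controllers-699d85858d-pssr6"} {
    		wg.Add(1)
    		go assignOne(pod, &wg)
    	}
    	wg.Wait()
    }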
May 15 12:53:27.458828 containerd[1555]: time="2025-05-15T12:53:27.458784792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699d85858d-pssr6,Uid:76049a04-26ee-4fa9-afd5-5ad317529d27,Namespace:calico-system,Attempt:0,} returns sandbox id \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\"" May 15 12:53:28.028339 kubelet[2697]: E0515 12:53:28.026312 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:28.028518 containerd[1555]: time="2025-05-15T12:53:28.027885198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq7vf,Uid:47aeb2aa-cfcd-4701-8f9f-c898edfab234,Namespace:kube-system,Attempt:0,}" May 15 12:53:28.028518 containerd[1555]: time="2025-05-15T12:53:28.028266399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-95bdc,Uid:72f594de-0445-4674-8b32-ccb3305262a8,Namespace:calico-apiserver,Attempt:0,}" May 15 12:53:28.177051 systemd-networkd[1458]: cali271a8e7bcbb: Link UP May 15 12:53:28.177838 systemd-networkd[1458]: cali271a8e7bcbb: Gained carrier May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.086 [INFO][4324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0 coredns-668d6bf9bc- kube-system 47aeb2aa-cfcd-4701-8f9f-c898edfab234 766 0 2025-05-15 12:52:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-126-108 coredns-668d6bf9bc-jq7vf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali271a8e7bcbb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.086 [INFO][4324] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.122 [INFO][4346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" HandleID="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Workload="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.134 [INFO][4346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" HandleID="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Workload="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907a0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-126-108", "pod":"coredns-668d6bf9bc-jq7vf", "timestamp":"2025-05-15 12:53:28.122228404 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.134 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.134 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.134 [INFO][4346] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.138 [INFO][4346] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.145 [INFO][4346] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.150 [INFO][4346] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.152 [INFO][4346] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.155 [INFO][4346] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.155 [INFO][4346] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.156 [INFO][4346] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.161 [INFO][4346] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.169 [INFO][4346] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.133/26] block=192.168.62.128/26 handle="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.169 [INFO][4346] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.133/26] handle="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" host="172-236-126-108" May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.169 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:53:28.196658 containerd[1555]: 2025-05-15 12:53:28.170 [INFO][4346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.133/26] IPv6=[] ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" HandleID="k8s-pod-network.a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Workload="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" May 15 12:53:28.197182 containerd[1555]: 2025-05-15 12:53:28.172 [INFO][4324] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"47aeb2aa-cfcd-4701-8f9f-c898edfab234", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"coredns-668d6bf9bc-jq7vf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali271a8e7bcbb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:28.197182 containerd[1555]: 2025-05-15 12:53:28.173 [INFO][4324] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.133/32] ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" May 15 12:53:28.197182 containerd[1555]: 2025-05-15 12:53:28.173 [INFO][4324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali271a8e7bcbb ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" May 15 12:53:28.197182 containerd[1555]: 2025-05-15 12:53:28.178 [INFO][4324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" May 15 
12:53:28.197182 containerd[1555]: 2025-05-15 12:53:28.178 [INFO][4324] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"47aeb2aa-cfcd-4701-8f9f-c898edfab234", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a", Pod:"coredns-668d6bf9bc-jq7vf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali271a8e7bcbb", MAC:"b2:12:f3:66:36:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:28.197182 containerd[1555]: 2025-05-15 12:53:28.193 [INFO][4324] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" Namespace="kube-system" Pod="coredns-668d6bf9bc-jq7vf" WorkloadEndpoint="172--236--126--108-k8s-coredns--668d6bf9bc--jq7vf-eth0" May 15 12:53:28.222170 containerd[1555]: time="2025-05-15T12:53:28.221899311Z" level=info msg="connecting to shim a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a" address="unix:///run/containerd/s/a8bfaaa0e7dedf661ac04d4086935eb31a0352e840819d16cdf99b5a993f843a" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:28.253971 systemd[1]: Started cri-containerd-a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a.scope - libcontainer container a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a. 
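In the coredns endpoint dumps above, the WorkloadEndpointPort values are printed in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (CoreDNS's Prometheus metrics port), so the endpoint carries the stock CoreDNS port set. Decoded:

    package main

    import "fmt"

    func main() {
    	// Hex port values from the WorkloadEndpoint dump, in decimal.
    	fmt.Println(0x35, 0x23c1) // prints: 53 9153
    }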
May 15 12:53:28.263456 kubelet[2697]: E0515 12:53:28.263310 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:28.294085 systemd-networkd[1458]: cali7b382c0b257: Link UP May 15 12:53:28.296094 systemd-networkd[1458]: cali7b382c0b257: Gained carrier May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.088 [INFO][4318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0 calico-apiserver-5d86d7c9bb- calico-apiserver 72f594de-0445-4674-8b32-ccb3305262a8 770 0 2025-05-15 12:52:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d86d7c9bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-126-108 calico-apiserver-5d86d7c9bb-95bdc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b382c0b257 [] []}} ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.089 [INFO][4318] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.125 [INFO][4351] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.143 [INFO][4351] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-126-108", "pod":"calico-apiserver-5d86d7c9bb-95bdc", "timestamp":"2025-05-15 12:53:28.125462015 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.143 [INFO][4351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.169 [INFO][4351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.169 [INFO][4351] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.239 [INFO][4351] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.244 [INFO][4351] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.253 [INFO][4351] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.257 [INFO][4351] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.260 [INFO][4351] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.260 [INFO][4351] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.263 [INFO][4351] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.271 [INFO][4351] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.283 [INFO][4351] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.134/26] block=192.168.62.128/26 handle="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.283 [INFO][4351] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.134/26] handle="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" host="172-236-126-108" May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.283 [INFO][4351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:53:28.317170 containerd[1555]: 2025-05-15 12:53:28.283 [INFO][4351] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.134/26] IPv6=[] ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:28.319696 containerd[1555]: 2025-05-15 12:53:28.286 [INFO][4318] cni-plugin/k8s.go 386: Populated endpoint ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0", GenerateName:"calico-apiserver-5d86d7c9bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"72f594de-0445-4674-8b32-ccb3305262a8", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86d7c9bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"calico-apiserver-5d86d7c9bb-95bdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b382c0b257", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:28.319696 containerd[1555]: 2025-05-15 12:53:28.286 [INFO][4318] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.134/32] ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:28.319696 containerd[1555]: 2025-05-15 12:53:28.286 [INFO][4318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b382c0b257 ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:28.319696 containerd[1555]: 2025-05-15 12:53:28.296 [INFO][4318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:28.319696 containerd[1555]: 2025-05-15 12:53:28.297 [INFO][4318] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0", GenerateName:"calico-apiserver-5d86d7c9bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"72f594de-0445-4674-8b32-ccb3305262a8", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d86d7c9bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755", Pod:"calico-apiserver-5d86d7c9bb-95bdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b382c0b257", MAC:"8a:50:4f:7a:79:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:28.319696 containerd[1555]: 2025-05-15 12:53:28.314 [INFO][4318] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Namespace="calico-apiserver" Pod="calico-apiserver-5d86d7c9bb-95bdc" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:28.367374 containerd[1555]: time="2025-05-15T12:53:28.367010365Z" level=info msg="connecting to shim 52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" address="unix:///run/containerd/s/08f278585de8e08da7974b69a7ed611216e516061ce9ef231da78429c0fab9b6" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:28.368914 containerd[1555]: time="2025-05-15T12:53:28.368894786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jq7vf,Uid:47aeb2aa-cfcd-4701-8f9f-c898edfab234,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a\"" May 15 12:53:28.370104 kubelet[2697]: E0515 12:53:28.370085 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:28.373764 containerd[1555]: time="2025-05-15T12:53:28.373744168Z" level=info msg="CreateContainer within sandbox \"a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 12:53:28.386650 containerd[1555]: time="2025-05-15T12:53:28.386593763Z" level=info msg="Container dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:28.393800 
containerd[1555]: time="2025-05-15T12:53:28.393778815Z" level=info msg="CreateContainer within sandbox \"a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff\"" May 15 12:53:28.395127 containerd[1555]: time="2025-05-15T12:53:28.395073716Z" level=info msg="StartContainer for \"dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff\"" May 15 12:53:28.399858 containerd[1555]: time="2025-05-15T12:53:28.399805058Z" level=info msg="connecting to shim dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff" address="unix:///run/containerd/s/a8bfaaa0e7dedf661ac04d4086935eb31a0352e840819d16cdf99b5a993f843a" protocol=ttrpc version=3 May 15 12:53:28.409691 systemd[1]: Started cri-containerd-52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755.scope - libcontainer container 52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755. May 15 12:53:28.436692 systemd[1]: Started cri-containerd-dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff.scope - libcontainer container dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff. May 15 12:53:28.448904 systemd-networkd[1458]: cali064fbecee0e: Gained IPv6LL May 15 12:53:28.497456 containerd[1555]: time="2025-05-15T12:53:28.497013204Z" level=info msg="StartContainer for \"dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff\" returns successfully" May 15 12:53:28.524849 containerd[1555]: time="2025-05-15T12:53:28.524719124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d86d7c9bb-95bdc,Uid:72f594de-0445-4674-8b32-ccb3305262a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\"" May 15 12:53:28.960902 systemd-networkd[1458]: calic0bbf0a6347: Gained IPv6LL May 15 12:53:29.026257 containerd[1555]: time="2025-05-15T12:53:29.026189911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b45b489c-mh8vn,Uid:a3c86fa8-07d6-4bd0-ba95-5246fc2365f5,Namespace:calico-apiserver,Attempt:0,}" May 15 12:53:29.144374 systemd-networkd[1458]: cali4b778d3f5c4: Link UP May 15 12:53:29.145424 systemd-networkd[1458]: cali4b778d3f5c4: Gained carrier May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.068 [INFO][4516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0 calico-apiserver-86b45b489c- calico-apiserver a3c86fa8-07d6-4bd0-ba95-5246fc2365f5 769 0 2025-05-15 12:52:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86b45b489c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-126-108 calico-apiserver-86b45b489c-mh8vn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b778d3f5c4 [] []}} ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.068 [INFO][4516] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.101 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" HandleID="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Workload="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.111 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" HandleID="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Workload="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003adb60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-126-108", "pod":"calico-apiserver-86b45b489c-mh8vn", "timestamp":"2025-05-15 12:53:29.101049587 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.111 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.111 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.111 [INFO][4528] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.114 [INFO][4528] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.117 [INFO][4528] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.122 [INFO][4528] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.123 [INFO][4528] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.126 [INFO][4528] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.126 [INFO][4528] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.127 [INFO][4528] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.131 [INFO][4528] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 
handle="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.137 [INFO][4528] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.135/26] block=192.168.62.128/26 handle="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.137 [INFO][4528] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.135/26] handle="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" host="172-236-126-108" May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.137 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:53:29.168701 containerd[1555]: 2025-05-15 12:53:29.137 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.135/26] IPv6=[] ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" HandleID="k8s-pod-network.3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Workload="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" May 15 12:53:29.169672 containerd[1555]: 2025-05-15 12:53:29.140 [INFO][4516] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0", GenerateName:"calico-apiserver-86b45b489c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3c86fa8-07d6-4bd0-ba95-5246fc2365f5", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86b45b489c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"calico-apiserver-86b45b489c-mh8vn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b778d3f5c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:29.169672 containerd[1555]: 2025-05-15 12:53:29.140 [INFO][4516] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.135/32] ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" May 15 12:53:29.169672 containerd[1555]: 2025-05-15 12:53:29.140 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali4b778d3f5c4 ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" May 15 12:53:29.169672 containerd[1555]: 2025-05-15 12:53:29.146 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" May 15 12:53:29.169672 containerd[1555]: 2025-05-15 12:53:29.146 [INFO][4516] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0", GenerateName:"calico-apiserver-86b45b489c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3c86fa8-07d6-4bd0-ba95-5246fc2365f5", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86b45b489c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f", Pod:"calico-apiserver-86b45b489c-mh8vn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b778d3f5c4", MAC:"de:b2:31:c1:d0:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:29.169672 containerd[1555]: 2025-05-15 12:53:29.158 [INFO][4516] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-mh8vn" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--mh8vn-eth0" May 15 12:53:29.198380 containerd[1555]: time="2025-05-15T12:53:29.198338592Z" level=info msg="connecting to shim 3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f" address="unix:///run/containerd/s/8a8311e0e294f14a66e2dcb56fb5c532b9bb94918ba2ae6ad409ed3933fe23ad" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:29.231681 systemd[1]: Started cri-containerd-3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f.scope - libcontainer container 3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f. 
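
The IPAM trace above walks the usual Calico path: take the host-wide IPAM lock, confirm the node's affinity for block 192.168.62.128/26, claim 192.168.62.135 out of it, and release the lock. A minimal sketch of the block arithmetic behind that trace, using only the Go standard library (the variable names are illustrative, not Calico's own):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The affine block this node holds, and the address IPAM handed out.
        _, block, err := net.ParseCIDR("192.168.62.128/26")
        if err != nil {
            panic(err)
        }
        ip := net.ParseIP("192.168.62.135")

        // A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
        ones, bits := block.Mask.Size()
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones))

        // The assigned address must fall inside the node's affine block.
        fmt.Printf("%s in %s: %t\n", ip, block, block.Contains(ip))
    }

Run against the values in the log, this prints a 64-address block that contains 192.168.62.135, which is why the endpoint is written with IPNetworks ["192.168.62.135/32"].
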
May 15 12:53:29.267675 kubelet[2697]: E0515 12:53:29.267538 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:29.300336 kubelet[2697]: I0515 12:53:29.298636 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jq7vf" podStartSLOduration=58.298620217 podStartE2EDuration="58.298620217s" podCreationTimestamp="2025-05-15 12:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:53:29.283603431 +0000 UTC m=+63.344761849" watchObservedRunningTime="2025-05-15 12:53:29.298620217 +0000 UTC m=+63.359778635" May 15 12:53:29.318383 containerd[1555]: time="2025-05-15T12:53:29.318063054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b45b489c-mh8vn,Uid:a3c86fa8-07d6-4bd0-ba95-5246fc2365f5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f\"" May 15 12:53:29.600713 systemd-networkd[1458]: cali271a8e7bcbb: Gained IPv6LL May 15 12:53:29.921144 systemd-networkd[1458]: cali7b382c0b257: Gained IPv6LL May 15 12:53:30.273906 kubelet[2697]: E0515 12:53:30.273735 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:31.008807 systemd-networkd[1458]: cali4b778d3f5c4: Gained IPv6LL May 15 12:53:31.278173 kubelet[2697]: E0515 12:53:31.278080 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:31.763954 containerd[1555]: time="2025-05-15T12:53:31.763909906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:31.764875 containerd[1555]: time="2025-05-15T12:53:31.764729946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 15 12:53:31.765365 containerd[1555]: time="2025-05-15T12:53:31.765333766Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:31.766992 containerd[1555]: time="2025-05-15T12:53:31.766943267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:31.767667 containerd[1555]: time="2025-05-15T12:53:31.767539197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 7.403062474s" May 15 12:53:31.767667 containerd[1555]: time="2025-05-15T12:53:31.767590057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference 
\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 12:53:31.770041 containerd[1555]: time="2025-05-15T12:53:31.769840958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 12:53:31.771159 containerd[1555]: time="2025-05-15T12:53:31.771122598Z" level=info msg="CreateContainer within sandbox \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:53:31.779833 containerd[1555]: time="2025-05-15T12:53:31.779078311Z" level=info msg="Container 51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:31.792778 containerd[1555]: time="2025-05-15T12:53:31.792733475Z" level=info msg="CreateContainer within sandbox \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\"" May 15 12:53:31.793488 containerd[1555]: time="2025-05-15T12:53:31.793436645Z" level=info msg="StartContainer for \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\"" May 15 12:53:31.795082 containerd[1555]: time="2025-05-15T12:53:31.795049196Z" level=info msg="connecting to shim 51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd" address="unix:///run/containerd/s/b3afe7eded49235f0e7c109b92cbc9fe6885e8a6cbfd1ee610878633394d4370" protocol=ttrpc version=3 May 15 12:53:31.826688 systemd[1]: Started cri-containerd-51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd.scope - libcontainer container 51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd. May 15 12:53:31.878609 containerd[1555]: time="2025-05-15T12:53:31.878165481Z" level=info msg="StartContainer for \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" returns successfully" May 15 12:53:32.295328 kubelet[2697]: I0515 12:53:32.294788 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-64dfc" podStartSLOduration=45.889695649 podStartE2EDuration="53.294773724s" podCreationTimestamp="2025-05-15 12:52:39 +0000 UTC" firstStartedPulling="2025-05-15 12:53:24.363346082 +0000 UTC m=+58.424504500" lastFinishedPulling="2025-05-15 12:53:31.768424157 +0000 UTC m=+65.829582575" observedRunningTime="2025-05-15 12:53:32.294017434 +0000 UTC m=+66.355175852" watchObservedRunningTime="2025-05-15 12:53:32.294773724 +0000 UTC m=+66.355932142" May 15 12:53:33.284268 kubelet[2697]: I0515 12:53:33.284238 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:53:36.782409 containerd[1555]: time="2025-05-15T12:53:36.782357104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:36.783305 containerd[1555]: time="2025-05-15T12:53:36.783271087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 15 12:53:36.783956 containerd[1555]: time="2025-05-15T12:53:36.783892726Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:36.785244 containerd[1555]: time="2025-05-15T12:53:36.785218205Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:36.785868 containerd[1555]: time="2025-05-15T12:53:36.785740282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 5.015877234s" May 15 12:53:36.785868 containerd[1555]: time="2025-05-15T12:53:36.785770143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 12:53:36.787244 containerd[1555]: time="2025-05-15T12:53:36.787218333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 12:53:36.788930 containerd[1555]: time="2025-05-15T12:53:36.788853377Z" level=info msg="CreateContainer within sandbox \"0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 12:53:36.796244 containerd[1555]: time="2025-05-15T12:53:36.795866036Z" level=info msg="Container 75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:36.802756 containerd[1555]: time="2025-05-15T12:53:36.802730204Z" level=info msg="CreateContainer within sandbox \"0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca\"" May 15 12:53:36.803839 containerd[1555]: time="2025-05-15T12:53:36.803784049Z" level=info msg="StartContainer for \"75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca\"" May 15 12:53:36.805641 containerd[1555]: time="2025-05-15T12:53:36.805550384Z" level=info msg="connecting to shim 75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca" address="unix:///run/containerd/s/4080a192d1ac6026d9f2c3ce752d67b0e9498b55f2e474ddcb835928b6e83335" protocol=ttrpc version=3 May 15 12:53:36.832681 systemd[1]: Started cri-containerd-75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca.scope - libcontainer container 75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca. 
May 15 12:53:36.893589 containerd[1555]: time="2025-05-15T12:53:36.891754599Z" level=info msg="StartContainer for \"75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca\" returns successfully" May 15 12:53:37.026265 kubelet[2697]: E0515 12:53:37.026237 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:40.010701 containerd[1555]: time="2025-05-15T12:53:40.010521787Z" level=info msg="StopContainer for \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" with timeout 300 (s)" May 15 12:53:40.013357 containerd[1555]: time="2025-05-15T12:53:40.013329142Z" level=info msg="Stop container \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" with signal terminated" May 15 12:53:40.241118 containerd[1555]: time="2025-05-15T12:53:40.241078197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" id:\"916474d330409d83c020982b5fca5bbd19587bf392ad7b98c59e37d978803d9b\" pid:4717 exited_at:{seconds:1747313620 nanos:240455069}" May 15 12:53:40.267786 containerd[1555]: time="2025-05-15T12:53:40.267677325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" id:\"eca2fd7f79a5604a6edca1d16fa6bc6d31b785f10a9962747369a7a16d07cac2\" pid:4720 exited_at:{seconds:1747313620 nanos:266823464}" May 15 12:53:40.271934 containerd[1555]: time="2025-05-15T12:53:40.271908529Z" level=info msg="StopContainer for \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" with timeout 5 (s)" May 15 12:53:40.272344 containerd[1555]: time="2025-05-15T12:53:40.272316534Z" level=info msg="Stop container \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" with signal terminated" May 15 12:53:40.302773 systemd[1]: cri-containerd-2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499.scope: Deactivated successfully. May 15 12:53:40.303299 systemd[1]: cri-containerd-2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499.scope: Consumed 1.778s CPU time, 181.3M memory peak, 644K written to disk. May 15 12:53:40.306417 containerd[1555]: time="2025-05-15T12:53:40.306380577Z" level=info msg="received exit event container_id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" pid:3675 exited_at:{seconds:1747313620 nanos:306219525}" May 15 12:53:40.307439 containerd[1555]: time="2025-05-15T12:53:40.307418930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" id:\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" pid:3675 exited_at:{seconds:1747313620 nanos:306219525}" May 15 12:53:40.332123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499-rootfs.mount: Deactivated successfully. 
May 15 12:53:40.357958 containerd[1555]: time="2025-05-15T12:53:40.357904512Z" level=info msg="StopContainer for \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" returns successfully" May 15 12:53:40.359488 containerd[1555]: time="2025-05-15T12:53:40.359467492Z" level=info msg="StopPodSandbox for \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\"" May 15 12:53:40.359540 containerd[1555]: time="2025-05-15T12:53:40.359520552Z" level=info msg="Container to stop \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:40.359540 containerd[1555]: time="2025-05-15T12:53:40.359531223Z" level=info msg="Container to stop \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:40.359540 containerd[1555]: time="2025-05-15T12:53:40.359538973Z" level=info msg="Container to stop \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:40.365440 systemd[1]: cri-containerd-49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea.scope: Deactivated successfully. May 15 12:53:40.367425 containerd[1555]: time="2025-05-15T12:53:40.367381052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" id:\"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" pid:3235 exit_status:137 exited_at:{seconds:1747313620 nanos:366002625}" May 15 12:53:40.395039 containerd[1555]: time="2025-05-15T12:53:40.395009783Z" level=info msg="shim disconnected" id=49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea namespace=k8s.io May 15 12:53:40.395039 containerd[1555]: time="2025-05-15T12:53:40.395035064Z" level=warning msg="cleaning up after shim disconnected" id=49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea namespace=k8s.io May 15 12:53:40.395199 containerd[1555]: time="2025-05-15T12:53:40.395043304Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:53:40.395889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea-rootfs.mount: Deactivated successfully. May 15 12:53:40.414148 containerd[1555]: time="2025-05-15T12:53:40.414105406Z" level=info msg="received exit event sandbox_id:\"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" exit_status:137 exited_at:{seconds:1747313620 nanos:366002625}" May 15 12:53:40.416290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea-shm.mount: Deactivated successfully. 
May 15 12:53:40.416538 containerd[1555]: time="2025-05-15T12:53:40.416516667Z" level=info msg="TearDown network for sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" successfully" May 15 12:53:40.416737 containerd[1555]: time="2025-05-15T12:53:40.416599598Z" level=info msg="StopPodSandbox for \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" returns successfully" May 15 12:53:40.453217 kubelet[2697]: I0515 12:53:40.453156 2697 memory_manager.go:355] "RemoveStaleState removing state" podUID="bb718bd4-90ab-4183-91f8-0d4b9a2bab80" containerName="calico-node" May 15 12:53:40.466282 systemd[1]: Created slice kubepods-besteffort-pod13fd7d46_d77e_4d54_aec1_5a6b61c3b986.slice - libcontainer container kubepods-besteffort-pod13fd7d46_d77e_4d54_aec1_5a6b61c3b986.slice. May 15 12:53:40.500012 kubelet[2697]: I0515 12:53:40.499976 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-log-dir\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500012 kubelet[2697]: I0515 12:53:40.500015 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-xtables-lock\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500490 kubelet[2697]: I0515 12:53:40.500042 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-node-certs\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500490 kubelet[2697]: I0515 12:53:40.500062 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-policysync\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500490 kubelet[2697]: I0515 12:53:40.500080 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-tigera-ca-bundle\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500490 kubelet[2697]: I0515 12:53:40.500112 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-lib-calico\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500490 kubelet[2697]: I0515 12:53:40.500128 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-run-calico\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500490 kubelet[2697]: I0515 12:53:40.500143 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-lib-modules\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: 
\"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500742 kubelet[2697]: I0515 12:53:40.500170 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-net-dir\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500742 kubelet[2697]: I0515 12:53:40.500186 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-flexvol-driver-host\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500742 kubelet[2697]: I0515 12:53:40.500204 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6q6f\" (UniqueName: \"kubernetes.io/projected/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-kube-api-access-k6q6f\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500742 kubelet[2697]: I0515 12:53:40.500226 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-bin-dir\") pod \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\" (UID: \"bb718bd4-90ab-4183-91f8-0d4b9a2bab80\") " May 15 12:53:40.500742 kubelet[2697]: I0515 12:53:40.500299 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-policysync\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.500742 kubelet[2697]: I0515 12:53:40.500319 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-cni-log-dir\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.500902 kubelet[2697]: I0515 12:53:40.500341 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbmh\" (UniqueName: \"kubernetes.io/projected/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-kube-api-access-fhbmh\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.500902 kubelet[2697]: I0515 12:53:40.500357 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-node-certs\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.500902 kubelet[2697]: I0515 12:53:40.500373 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-cni-bin-dir\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.500902 kubelet[2697]: I0515 12:53:40.500394 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-lib-modules\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.500902 kubelet[2697]: I0515 12:53:40.500412 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-var-lib-calico\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.501632 kubelet[2697]: I0515 12:53:40.500436 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-cni-net-dir\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.501632 kubelet[2697]: I0515 12:53:40.500460 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-flexvol-driver-host\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.501632 kubelet[2697]: I0515 12:53:40.500478 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-tigera-ca-bundle\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.501632 kubelet[2697]: I0515 12:53:40.500493 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-var-run-calico\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.501632 kubelet[2697]: I0515 12:53:40.500516 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13fd7d46-d77e-4d54-aec1-5a6b61c3b986-xtables-lock\") pod \"calico-node-8gsp7\" (UID: \"13fd7d46-d77e-4d54-aec1-5a6b61c3b986\") " pod="calico-system/calico-node-8gsp7" May 15 12:53:40.501749 kubelet[2697]: I0515 12:53:40.501088 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.501749 kubelet[2697]: I0515 12:53:40.501154 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-policysync" (OuterVolumeSpecName: "policysync") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.505456 kubelet[2697]: I0515 12:53:40.502328 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.505456 kubelet[2697]: I0515 12:53:40.502357 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.505456 kubelet[2697]: I0515 12:53:40.502374 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.505456 kubelet[2697]: I0515 12:53:40.502389 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.505456 kubelet[2697]: I0515 12:53:40.502405 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.506246 kubelet[2697]: I0515 12:53:40.505821 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.506367 kubelet[2697]: I0515 12:53:40.506335 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 12:53:40.508434 systemd[1]: var-lib-kubelet-pods-bb718bd4\x2d90ab\x2d4183\x2d91f8\x2d0d4b9a2bab80-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
May 15 12:53:40.510502 kubelet[2697]: I0515 12:53:40.510482 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-node-certs" (OuterVolumeSpecName: "node-certs") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 12:53:40.513107 kubelet[2697]: I0515 12:53:40.513077 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-kube-api-access-k6q6f" (OuterVolumeSpecName: "kube-api-access-k6q6f") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "kube-api-access-k6q6f". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:53:40.513456 systemd[1]: var-lib-kubelet-pods-bb718bd4\x2d90ab\x2d4183\x2d91f8\x2d0d4b9a2bab80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6q6f.mount: Deactivated successfully. May 15 12:53:40.518623 kubelet[2697]: I0515 12:53:40.518076 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "bb718bd4-90ab-4183-91f8-0d4b9a2bab80" (UID: "bb718bd4-90ab-4183-91f8-0d4b9a2bab80"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 12:53:40.601322 kubelet[2697]: I0515 12:53:40.601289 2697 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-bin-dir\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601322 kubelet[2697]: I0515 12:53:40.601318 2697 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-log-dir\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601322 kubelet[2697]: I0515 12:53:40.601328 2697 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-node-certs\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601338 2697 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-xtables-lock\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601349 2697 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-policysync\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601357 2697 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-lib-calico\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601392 2697 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-tigera-ca-bundle\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601408 2697 reconciler_common.go:299] "Volume detached for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-var-run-calico\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601417 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6q6f\" (UniqueName: \"kubernetes.io/projected/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-kube-api-access-k6q6f\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601429 2697 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-lib-modules\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601483 kubelet[2697]: I0515 12:53:40.601438 2697 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-cni-net-dir\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.601732 kubelet[2697]: I0515 12:53:40.601447 2697 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb718bd4-90ab-4183-91f8-0d4b9a2bab80-flexvol-driver-host\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:40.770251 kubelet[2697]: E0515 12:53:40.769905 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:40.771197 containerd[1555]: time="2025-05-15T12:53:40.771044234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8gsp7,Uid:13fd7d46-d77e-4d54-aec1-5a6b61c3b986,Namespace:calico-system,Attempt:0,}" May 15 12:53:40.791337 containerd[1555]: time="2025-05-15T12:53:40.791255160Z" level=info msg="connecting to shim 89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f" address="unix:///run/containerd/s/25238cfe83fa117681853be5a9c0c1743f1ca972309c17543b5ba7c5a7b746ac" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:40.821732 systemd[1]: Started cri-containerd-89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f.scope - libcontainer container 89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f. 
May 15 12:53:40.853131 containerd[1555]: time="2025-05-15T12:53:40.853090386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8gsp7,Uid:13fd7d46-d77e-4d54-aec1-5a6b61c3b986,Namespace:calico-system,Attempt:0,} returns sandbox id \"89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f\"" May 15 12:53:40.854441 kubelet[2697]: E0515 12:53:40.854403 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:40.856932 containerd[1555]: time="2025-05-15T12:53:40.856866474Z" level=info msg="CreateContainer within sandbox \"89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 12:53:40.862994 containerd[1555]: time="2025-05-15T12:53:40.862948271Z" level=info msg="Container 4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:40.873891 containerd[1555]: time="2025-05-15T12:53:40.873766779Z" level=info msg="CreateContainer within sandbox \"89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742\"" May 15 12:53:40.874546 containerd[1555]: time="2025-05-15T12:53:40.874507338Z" level=info msg="StartContainer for \"4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742\"" May 15 12:53:40.876193 containerd[1555]: time="2025-05-15T12:53:40.876153869Z" level=info msg="connecting to shim 4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742" address="unix:///run/containerd/s/25238cfe83fa117681853be5a9c0c1743f1ca972309c17543b5ba7c5a7b746ac" protocol=ttrpc version=3 May 15 12:53:40.899724 systemd[1]: Started cri-containerd-4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742.scope - libcontainer container 4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742. May 15 12:53:40.949067 containerd[1555]: time="2025-05-15T12:53:40.948936244Z" level=info msg="StartContainer for \"4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742\" returns successfully" May 15 12:53:40.973738 systemd[1]: cri-containerd-4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742.scope: Deactivated successfully. May 15 12:53:40.974394 systemd[1]: cri-containerd-4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742.scope: Consumed 41ms CPU time, 8.3M memory peak, 4K read from disk, 6.3M written to disk. 
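
The flexvol-driver flow above is the standard CRI two-step: CreateContainer against the already-running sandbox returns an ID, then StartContainer launches it, which is when containerd logs "StartContainer ... returns successfully". A hedged sketch of that call pair against the k8s.io/cri-api gRPC surface; the config fields are elided, and a real request would need a full ContainerConfig and SandboxConfig to succeed:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // The CRI endpoint containerd serves on this node.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx := context.Background()
        // Step 1: create the container inside the existing pod sandbox.
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: "89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f",
            Config:       &runtimeapi.ContainerConfig{ /* image, mounts, env elided */ },
        })
        if err != nil {
            log.Fatal(err)
        }
        // Step 2: start it.
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        })
        if err != nil {
            log.Fatal(err)
        }
    }
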
May 15 12:53:40.974910 containerd[1555]: time="2025-05-15T12:53:40.974879114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742\" id:\"4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742\" pid:4871 exited_at:{seconds:1747313620 nanos:974105454}" May 15 12:53:40.975026 containerd[1555]: time="2025-05-15T12:53:40.974987585Z" level=info msg="received exit event container_id:\"4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742\" id:\"4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742\" pid:4871 exited_at:{seconds:1747313620 nanos:974105454}" May 15 12:53:41.312945 kubelet[2697]: I0515 12:53:41.312920 2697 scope.go:117] "RemoveContainer" containerID="2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499" May 15 12:53:41.321260 kubelet[2697]: E0515 12:53:41.321231 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:41.322910 containerd[1555]: time="2025-05-15T12:53:41.322874185Z" level=info msg="RemoveContainer for \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\"" May 15 12:53:41.324095 systemd[1]: Removed slice kubepods-besteffort-podbb718bd4_90ab_4183_91f8_0d4b9a2bab80.slice - libcontainer container kubepods-besteffort-podbb718bd4_90ab_4183_91f8_0d4b9a2bab80.slice. May 15 12:53:41.324184 systemd[1]: kubepods-besteffort-podbb718bd4_90ab_4183_91f8_0d4b9a2bab80.slice: Consumed 4.454s CPU time, 310.2M memory peak, 161.1M written to disk. May 15 12:53:41.333253 containerd[1555]: time="2025-05-15T12:53:41.333064511Z" level=info msg="CreateContainer within sandbox \"89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 12:53:41.341623 systemd[1]: var-lib-kubelet-pods-bb718bd4\x2d90ab\x2d4183\x2d91f8\x2d0d4b9a2bab80-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 15 12:53:41.351473 containerd[1555]: time="2025-05-15T12:53:41.350939522Z" level=info msg="RemoveContainer for \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" returns successfully" May 15 12:53:41.351963 kubelet[2697]: I0515 12:53:41.351935 2697 scope.go:117] "RemoveContainer" containerID="174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2" May 15 12:53:41.357279 containerd[1555]: time="2025-05-15T12:53:41.357081708Z" level=info msg="RemoveContainer for \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\"" May 15 12:53:41.359545 systemd[1]: cri-containerd-80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5.scope: Deactivated successfully. 
May 15 12:53:41.367635 containerd[1555]: time="2025-05-15T12:53:41.367043921Z" level=info msg="received exit event container_id:\"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" id:\"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" pid:3359 exit_status:1 exited_at:{seconds:1747313621 nanos:364878574}" May 15 12:53:41.367635 containerd[1555]: time="2025-05-15T12:53:41.367177303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" id:\"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" pid:3359 exit_status:1 exited_at:{seconds:1747313621 nanos:364878574}" May 15 12:53:41.389012 containerd[1555]: time="2025-05-15T12:53:41.388983332Z" level=info msg="RemoveContainer for \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" returns successfully" May 15 12:53:41.394633 containerd[1555]: time="2025-05-15T12:53:41.393630360Z" level=info msg="Container 8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:41.396384 kubelet[2697]: I0515 12:53:41.396362 2697 scope.go:117] "RemoveContainer" containerID="e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7" May 15 12:53:41.398054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559655102.mount: Deactivated successfully. May 15 12:53:41.402117 containerd[1555]: time="2025-05-15T12:53:41.402085095Z" level=info msg="RemoveContainer for \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\"" May 15 12:53:41.413926 containerd[1555]: time="2025-05-15T12:53:41.413881080Z" level=info msg="CreateContainer within sandbox \"89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6\"" May 15 12:53:41.415466 containerd[1555]: time="2025-05-15T12:53:41.415445510Z" level=info msg="StartContainer for \"8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6\"" May 15 12:53:41.417205 containerd[1555]: time="2025-05-15T12:53:41.417016539Z" level=info msg="RemoveContainer for \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" returns successfully" May 15 12:53:41.417790 kubelet[2697]: I0515 12:53:41.417704 2697 scope.go:117] "RemoveContainer" containerID="2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499" May 15 12:53:41.418749 containerd[1555]: time="2025-05-15T12:53:41.418448087Z" level=error msg="ContainerStatus for \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\": not found" May 15 12:53:41.418916 kubelet[2697]: E0515 12:53:41.418861 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\": not found" containerID="2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499" May 15 12:53:41.419282 kubelet[2697]: I0515 12:53:41.419000 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499"} err="failed to get container status 
\"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499\": not found" May 15 12:53:41.419282 kubelet[2697]: I0515 12:53:41.419226 2697 scope.go:117] "RemoveContainer" containerID="174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2" May 15 12:53:41.419899 containerd[1555]: time="2025-05-15T12:53:41.419779823Z" level=error msg="ContainerStatus for \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\": not found" May 15 12:53:41.420153 kubelet[2697]: E0515 12:53:41.420132 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\": not found" containerID="174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2" May 15 12:53:41.420194 kubelet[2697]: I0515 12:53:41.420156 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2"} err="failed to get container status \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2\": not found" May 15 12:53:41.420194 kubelet[2697]: I0515 12:53:41.420171 2697 scope.go:117] "RemoveContainer" containerID="e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7" May 15 12:53:41.420367 containerd[1555]: time="2025-05-15T12:53:41.420333270Z" level=error msg="ContainerStatus for \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\": not found" May 15 12:53:41.420875 containerd[1555]: time="2025-05-15T12:53:41.420820346Z" level=info msg="connecting to shim 8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6" address="unix:///run/containerd/s/25238cfe83fa117681853be5a9c0c1743f1ca972309c17543b5ba7c5a7b746ac" protocol=ttrpc version=3 May 15 12:53:41.421100 kubelet[2697]: E0515 12:53:41.421076 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\": not found" containerID="e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7" May 15 12:53:41.421100 kubelet[2697]: I0515 12:53:41.421099 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7"} err="failed to get container status \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\": rpc error: code = NotFound desc = an error occurred when try to find container \"e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7\": not found" May 15 12:53:41.445404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5-rootfs.mount: Deactivated successfully. 
May 15 12:53:41.457705 containerd[1555]: time="2025-05-15T12:53:41.457669882Z" level=info msg="StopContainer for \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" returns successfully" May 15 12:53:41.458216 containerd[1555]: time="2025-05-15T12:53:41.458198118Z" level=info msg="StopPodSandbox for \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\"" May 15 12:53:41.458640 containerd[1555]: time="2025-05-15T12:53:41.458529882Z" level=info msg="Container to stop \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:41.459702 systemd[1]: Started cri-containerd-8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6.scope - libcontainer container 8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6. May 15 12:53:41.472034 systemd[1]: cri-containerd-1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6.scope: Deactivated successfully. May 15 12:53:41.474821 containerd[1555]: time="2025-05-15T12:53:41.474631681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" id:\"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" pid:3231 exit_status:137 exited_at:{seconds:1747313621 nanos:473936033}" May 15 12:53:41.529088 containerd[1555]: time="2025-05-15T12:53:41.529057254Z" level=info msg="shim disconnected" id=1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6 namespace=k8s.io May 15 12:53:41.529968 containerd[1555]: time="2025-05-15T12:53:41.529788313Z" level=warning msg="cleaning up after shim disconnected" id=1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6 namespace=k8s.io May 15 12:53:41.529968 containerd[1555]: time="2025-05-15T12:53:41.529804404Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:53:41.531586 containerd[1555]: time="2025-05-15T12:53:41.530990448Z" level=info msg="received exit event sandbox_id:\"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" exit_status:137 exited_at:{seconds:1747313621 nanos:473936033}" May 15 12:53:41.531586 containerd[1555]: time="2025-05-15T12:53:41.531380823Z" level=info msg="TearDown network for sandbox \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" successfully" May 15 12:53:41.531586 containerd[1555]: time="2025-05-15T12:53:41.531395943Z" level=info msg="StopPodSandbox for \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" returns successfully" May 15 12:53:41.542830 containerd[1555]: time="2025-05-15T12:53:41.542704383Z" level=info msg="StartContainer for \"8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6\" returns successfully" May 15 12:53:41.611416 kubelet[2697]: I0515 12:53:41.611379 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d96lv\" (UniqueName: \"kubernetes.io/projected/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-kube-api-access-d96lv\") pod \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\" (UID: \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\") " May 15 12:53:41.612680 kubelet[2697]: I0515 12:53:41.612614 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-typha-certs\") pod \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\" (UID: \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\") " May 15 12:53:41.612680 kubelet[2697]: I0515 
12:53:41.612666 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-tigera-ca-bundle\") pod \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\" (UID: \"c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8\") " May 15 12:53:41.616093 kubelet[2697]: I0515 12:53:41.616065 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-kube-api-access-d96lv" (OuterVolumeSpecName: "kube-api-access-d96lv") pod "c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8" (UID: "c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8"). InnerVolumeSpecName "kube-api-access-d96lv". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:53:41.618400 kubelet[2697]: I0515 12:53:41.618207 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8" (UID: "c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 12:53:41.618690 kubelet[2697]: I0515 12:53:41.618632 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8" (UID: "c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 12:53:41.713657 kubelet[2697]: I0515 12:53:41.713532 2697 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-tigera-ca-bundle\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:41.714474 kubelet[2697]: I0515 12:53:41.714279 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d96lv\" (UniqueName: \"kubernetes.io/projected/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-kube-api-access-d96lv\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:41.714474 kubelet[2697]: I0515 12:53:41.714295 2697 reconciler_common.go:299] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8-typha-certs\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:41.720286 kubelet[2697]: I0515 12:53:41.720231 2697 memory_manager.go:355] "RemoveStaleState removing state" podUID="c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8" containerName="calico-typha" May 15 12:53:41.730853 systemd[1]: Created slice kubepods-besteffort-pod6715a9a8_c09e_4f97_83b2_4f7cb833ffde.slice - libcontainer container kubepods-besteffort-pod6715a9a8_c09e_4f97_83b2_4f7cb833ffde.slice. 
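
The systemd slice names above are derived mechanically from pod metadata: the QoS class is folded into the slice hierarchy and the dashes in the pod UID become underscores, yielding kubepods-besteffort-pod6715a9a8_c09e_4f97_83b2_4f7cb833ffde.slice for UID 6715a9a8-c09e-4f97-83b2-4f7cb833ffde. A one-function sketch of that mapping (names are illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the slice naming visible above:
    // kubepods-<qos>-pod<uid with "-" replaced by "_">.slice
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "6715a9a8-c09e-4f97-83b2-4f7cb833ffde"))
    }
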
May 15 12:53:41.815224 kubelet[2697]: I0515 12:53:41.815081 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6715a9a8-c09e-4f97-83b2-4f7cb833ffde-tigera-ca-bundle\") pod \"calico-typha-866f4bb647-gk4r4\" (UID: \"6715a9a8-c09e-4f97-83b2-4f7cb833ffde\") " pod="calico-system/calico-typha-866f4bb647-gk4r4" May 15 12:53:41.815224 kubelet[2697]: I0515 12:53:41.815129 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6715a9a8-c09e-4f97-83b2-4f7cb833ffde-typha-certs\") pod \"calico-typha-866f4bb647-gk4r4\" (UID: \"6715a9a8-c09e-4f97-83b2-4f7cb833ffde\") " pod="calico-system/calico-typha-866f4bb647-gk4r4" May 15 12:53:41.815224 kubelet[2697]: I0515 12:53:41.815160 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sq4k\" (UniqueName: \"kubernetes.io/projected/6715a9a8-c09e-4f97-83b2-4f7cb833ffde-kube-api-access-2sq4k\") pod \"calico-typha-866f4bb647-gk4r4\" (UID: \"6715a9a8-c09e-4f97-83b2-4f7cb833ffde\") " pod="calico-system/calico-typha-866f4bb647-gk4r4" May 15 12:53:42.028430 kubelet[2697]: I0515 12:53:42.028344 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb718bd4-90ab-4183-91f8-0d4b9a2bab80" path="/var/lib/kubelet/pods/bb718bd4-90ab-4183-91f8-0d4b9a2bab80/volumes" May 15 12:53:42.035171 kubelet[2697]: E0515 12:53:42.035135 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:42.035344 systemd[1]: Removed slice kubepods-besteffort-podc040ea10_a6a4_4ebe_bdfa_0023f6fe49e8.slice - libcontainer container kubepods-besteffort-podc040ea10_a6a4_4ebe_bdfa_0023f6fe49e8.slice. May 15 12:53:42.036335 containerd[1555]: time="2025-05-15T12:53:42.036307814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866f4bb647-gk4r4,Uid:6715a9a8-c09e-4f97-83b2-4f7cb833ffde,Namespace:calico-system,Attempt:0,}" May 15 12:53:42.055668 containerd[1555]: time="2025-05-15T12:53:42.055305902Z" level=info msg="connecting to shim a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d" address="unix:///run/containerd/s/50351880e4750e0bf1ff59f24ea6bbdbeff76fbecdcfd2a35d690cd8b90c0488" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:42.079914 systemd[1]: Started cri-containerd-a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d.scope - libcontainer container a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d. 
May 15 12:53:42.190077 containerd[1555]: time="2025-05-15T12:53:42.189969892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866f4bb647-gk4r4,Uid:6715a9a8-c09e-4f97-83b2-4f7cb833ffde,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d\"" May 15 12:53:42.191702 kubelet[2697]: E0515 12:53:42.191596 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:42.204360 containerd[1555]: time="2025-05-15T12:53:42.204312115Z" level=info msg="CreateContainer within sandbox \"a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 12:53:42.214799 containerd[1555]: time="2025-05-15T12:53:42.214273635Z" level=info msg="Container 1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:42.221602 containerd[1555]: time="2025-05-15T12:53:42.221573592Z" level=info msg="CreateContainer within sandbox \"a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542\"" May 15 12:53:42.222284 containerd[1555]: time="2025-05-15T12:53:42.222265371Z" level=info msg="StartContainer for \"1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542\"" May 15 12:53:42.223769 containerd[1555]: time="2025-05-15T12:53:42.223730258Z" level=info msg="connecting to shim 1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542" address="unix:///run/containerd/s/50351880e4750e0bf1ff59f24ea6bbdbeff76fbecdcfd2a35d690cd8b90c0488" protocol=ttrpc version=3 May 15 12:53:42.251941 systemd[1]: Started cri-containerd-1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542.scope - libcontainer container 1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542. May 15 12:53:42.317476 containerd[1555]: time="2025-05-15T12:53:42.317330714Z" level=info msg="StartContainer for \"1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542\" returns successfully" May 15 12:53:42.327423 kubelet[2697]: I0515 12:53:42.327359 2697 scope.go:117] "RemoveContainer" containerID="80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5" May 15 12:53:42.336940 containerd[1555]: time="2025-05-15T12:53:42.336849869Z" level=info msg="RemoveContainer for \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\"" May 15 12:53:42.338723 systemd[1]: var-lib-kubelet-pods-c040ea10\x2da6a4\x2d4ebe\x2dbdfa\x2d0023f6fe49e8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 15 12:53:42.340216 kubelet[2697]: E0515 12:53:42.339432 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:42.338835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6-rootfs.mount: Deactivated successfully. May 15 12:53:42.338901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6-shm.mount: Deactivated successfully. 
May 15 12:53:42.338966 systemd[1]: var-lib-kubelet-pods-c040ea10\x2da6a4\x2d4ebe\x2dbdfa\x2d0023f6fe49e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd96lv.mount: Deactivated successfully. May 15 12:53:42.339030 systemd[1]: var-lib-kubelet-pods-c040ea10\x2da6a4\x2d4ebe\x2dbdfa\x2d0023f6fe49e8-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 15 12:53:42.359563 containerd[1555]: time="2025-05-15T12:53:42.359388210Z" level=info msg="RemoveContainer for \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" returns successfully" May 15 12:53:42.359650 kubelet[2697]: I0515 12:53:42.359594 2697 scope.go:117] "RemoveContainer" containerID="80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5" May 15 12:53:42.360685 containerd[1555]: time="2025-05-15T12:53:42.360651195Z" level=error msg="ContainerStatus for \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\": not found" May 15 12:53:42.362160 kubelet[2697]: E0515 12:53:42.361209 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\": not found" containerID="80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5" May 15 12:53:42.363849 kubelet[2697]: I0515 12:53:42.362164 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5"} err="failed to get container status \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\": rpc error: code = NotFound desc = an error occurred when try to find container \"80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5\": not found" May 15 12:53:42.364877 kubelet[2697]: E0515 12:53:42.364759 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:42.378331 kubelet[2697]: I0515 12:53:42.377757 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-866f4bb647-gk4r4" podStartSLOduration=3.377744471 podStartE2EDuration="3.377744471s" podCreationTimestamp="2025-05-15 12:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:53:42.363670962 +0000 UTC m=+76.424829380" watchObservedRunningTime="2025-05-15 12:53:42.377744471 +0000 UTC m=+76.438902889" May 15 12:53:42.551892 systemd[1]: cri-containerd-8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6.scope: Deactivated successfully. May 15 12:53:42.553676 systemd[1]: cri-containerd-8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6.scope: Consumed 816ms CPU time, 49.6M memory peak, 24.8M read from disk. 
May 15 12:53:42.562053 containerd[1555]: time="2025-05-15T12:53:42.562018768Z" level=info msg="received exit event container_id:\"8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6\" id:\"8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6\" pid:4939 exited_at:{seconds:1747313622 nanos:552876358}" May 15 12:53:42.562179 containerd[1555]: time="2025-05-15T12:53:42.562159969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6\" id:\"8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6\" pid:4939 exited_at:{seconds:1747313622 nanos:552876358}" May 15 12:53:42.586710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6-rootfs.mount: Deactivated successfully. May 15 12:53:43.026079 containerd[1555]: time="2025-05-15T12:53:43.025775218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:43.026619 containerd[1555]: time="2025-05-15T12:53:43.026548997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 12:53:43.027779 containerd[1555]: time="2025-05-15T12:53:43.027291395Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:43.029184 containerd[1555]: time="2025-05-15T12:53:43.029163357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:43.029637 containerd[1555]: time="2025-05-15T12:53:43.029609713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 6.242362438s" May 15 12:53:43.029693 containerd[1555]: time="2025-05-15T12:53:43.029638683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 15 12:53:43.030541 containerd[1555]: time="2025-05-15T12:53:43.030525143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:53:43.045771 containerd[1555]: time="2025-05-15T12:53:43.045730381Z" level=info msg="CreateContainer within sandbox \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 12:53:43.052006 containerd[1555]: time="2025-05-15T12:53:43.051707931Z" level=info msg="Container 7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:43.058567 containerd[1555]: time="2025-05-15T12:53:43.058534941Z" level=info msg="CreateContainer within sandbox \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\"" May 15 12:53:43.058965 containerd[1555]: time="2025-05-15T12:53:43.058906055Z" level=info msg="StartContainer for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\"" May 15 12:53:43.060784 containerd[1555]: time="2025-05-15T12:53:43.060761617Z" level=info msg="connecting to shim 7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a" address="unix:///run/containerd/s/2b34978c0145459aa47dabef8de74dbef959fb7a717376ccb7155704e19bda5d" protocol=ttrpc version=3 May 15 12:53:43.079685 systemd[1]: Started cri-containerd-7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a.scope - libcontainer container 7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a. May 15 12:53:43.136291 containerd[1555]: time="2025-05-15T12:53:43.136254911Z" level=info msg="StartContainer for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" returns successfully" May 15 12:53:43.330351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898541361.mount: Deactivated successfully. May 15 12:53:43.370979 kubelet[2697]: E0515 12:53:43.370948 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:43.373913 kubelet[2697]: E0515 12:53:43.373865 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:43.375448 containerd[1555]: time="2025-05-15T12:53:43.375406970Z" level=info msg="StopContainer for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" with timeout 30 (s)" May 15 12:53:43.379684 containerd[1555]: time="2025-05-15T12:53:43.379656309Z" level=info msg="Stop container \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" with signal terminated" May 15 12:53:43.394368 containerd[1555]: time="2025-05-15T12:53:43.394335661Z" level=info msg="CreateContainer within sandbox \"89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 12:53:43.420403 kubelet[2697]: I0515 12:53:43.415988 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-699d85858d-pssr6" podStartSLOduration=48.846462375 podStartE2EDuration="1m4.415974054s" podCreationTimestamp="2025-05-15 12:52:39 +0000 UTC" firstStartedPulling="2025-05-15 12:53:27.460900953 +0000 UTC m=+61.522059371" lastFinishedPulling="2025-05-15 12:53:43.030412622 +0000 UTC m=+77.091571050" observedRunningTime="2025-05-15 12:53:43.415099104 +0000 UTC m=+77.476257522" watchObservedRunningTime="2025-05-15 12:53:43.415974054 +0000 UTC m=+77.477132472" May 15 12:53:43.421596 containerd[1555]: time="2025-05-15T12:53:43.421503709Z" level=info msg="Container 57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:43.430533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047440623.mount: Deactivated successfully. May 15 12:53:43.446047 systemd[1]: cri-containerd-7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a.scope: Deactivated successfully.
May 15 12:53:43.473572 containerd[1555]: time="2025-05-15T12:53:43.473489928Z" level=info msg="CreateContainer within sandbox \"89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\"" May 15 12:53:43.474660 containerd[1555]: time="2025-05-15T12:53:43.474517600Z" level=info msg="StartContainer for \"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\"" May 15 12:53:43.477584 containerd[1555]: time="2025-05-15T12:53:43.477462864Z" level=info msg="connecting to shim 57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c" address="unix:///run/containerd/s/25238cfe83fa117681853be5a9c0c1743f1ca972309c17543b5ba7c5a7b746ac" protocol=ttrpc version=3 May 15 12:53:43.482574 containerd[1555]: time="2025-05-15T12:53:43.482533023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" id:\"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" pid:5114 exit_status:2 exited_at:{seconds:1747313623 nanos:482163869}" May 15 12:53:43.484568 containerd[1555]: time="2025-05-15T12:53:43.482550904Z" level=info msg="received exit event container_id:\"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" id:\"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" pid:5114 exit_status:2 exited_at:{seconds:1747313623 nanos:482163869}" May 15 12:53:43.487912 containerd[1555]: time="2025-05-15T12:53:43.487850946Z" level=error msg="ExecSync for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"0b6a33ef4e9d2b8db9158612f423dec8100363b709ccb8148bfb30acd19e6003\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" May 15 12:53:43.488265 kubelet[2697]: E0515 12:53:43.488190 2697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"0b6a33ef4e9d2b8db9158612f423dec8100363b709ccb8148bfb30acd19e6003\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" containerID="7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a" cmd=["/usr/bin/check-status","-r"] May 15 12:53:43.508902 systemd[1]: Started cri-containerd-57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c.scope - libcontainer container 57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c. 
May 15 12:53:43.511650 containerd[1555]: time="2025-05-15T12:53:43.511594294Z" level=error msg="ExecSync for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"5d62ec0da1e01dd3f908d0dfff0aad1385e31b59bb640adcb507439917e39790\": cannot exec in a stopped state" May 15 12:53:43.511976 kubelet[2697]: E0515 12:53:43.511935 2697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"5d62ec0da1e01dd3f908d0dfff0aad1385e31b59bb640adcb507439917e39790\": cannot exec in a stopped state" containerID="7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a" cmd=["/usr/bin/check-status","-r"] May 15 12:53:43.574634 containerd[1555]: time="2025-05-15T12:53:43.574441859Z" level=error msg="ExecSync for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: not found" May 15 12:53:43.577016 kubelet[2697]: E0515 12:53:43.576812 2697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: not found" containerID="7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a" cmd=["/usr/bin/check-status","-r"] May 15 12:53:43.581447 containerd[1555]: time="2025-05-15T12:53:43.574663402Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/2b34978c0145459aa47dabef8de74dbef959fb7a717376ccb7155704e19bda5d->@: write: broken pipe" runtime=io.containerd.runc.v2 May 15 12:53:43.587938 containerd[1555]: time="2025-05-15T12:53:43.587412471Z" level=info msg="StopContainer for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" returns successfully" May 15 12:53:43.590223 containerd[1555]: time="2025-05-15T12:53:43.589693888Z" level=info msg="StopPodSandbox for \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\"" May 15 12:53:43.590223 containerd[1555]: time="2025-05-15T12:53:43.589761538Z" level=info msg="Container to stop \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:43.604585 containerd[1555]: time="2025-05-15T12:53:43.604271648Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:43.607824 containerd[1555]: time="2025-05-15T12:53:43.607044091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 12:53:43.607582 systemd[1]: cri-containerd-19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303.scope: Deactivated successfully. 
May 15 12:53:43.616543 containerd[1555]: time="2025-05-15T12:53:43.616372920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" id:\"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" pid:4304 exit_status:137 exited_at:{seconds:1747313623 nanos:615618681}" May 15 12:53:43.624792 containerd[1555]: time="2025-05-15T12:53:43.624539315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 593.88148ms" May 15 12:53:43.626889 containerd[1555]: time="2025-05-15T12:53:43.626778102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 12:53:43.631491 containerd[1555]: time="2025-05-15T12:53:43.631467216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 12:53:43.634204 containerd[1555]: time="2025-05-15T12:53:43.633986216Z" level=info msg="CreateContainer within sandbox \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:53:43.649225 containerd[1555]: time="2025-05-15T12:53:43.649194794Z" level=info msg="Container 2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:43.661589 containerd[1555]: time="2025-05-15T12:53:43.661178634Z" level=info msg="CreateContainer within sandbox \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\"" May 15 12:53:43.665624 containerd[1555]: time="2025-05-15T12:53:43.664694035Z" level=info msg="StartContainer for \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\"" May 15 12:53:43.670702 containerd[1555]: time="2025-05-15T12:53:43.670678455Z" level=info msg="connecting to shim 2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387" address="unix:///run/containerd/s/08f278585de8e08da7974b69a7ed611216e516061ce9ef231da78429c0fab9b6" protocol=ttrpc version=3 May 15 12:53:43.672075 containerd[1555]: time="2025-05-15T12:53:43.672054841Z" level=info msg="shim disconnected" id=19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 namespace=k8s.io May 15 12:53:43.672324 containerd[1555]: time="2025-05-15T12:53:43.672307374Z" level=warning msg="cleaning up after shim disconnected" id=19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 namespace=k8s.io May 15 12:53:43.673184 containerd[1555]: time="2025-05-15T12:53:43.673150714Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:53:43.715947 systemd[1]: Started cri-containerd-2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387.scope - libcontainer container 2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387. 
May 15 12:53:43.722159 containerd[1555]: time="2025-05-15T12:53:43.722011506Z" level=info msg="received exit event sandbox_id:\"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" exit_status:137 exited_at:{seconds:1747313623 nanos:615618681}" May 15 12:53:43.726861 containerd[1555]: time="2025-05-15T12:53:43.726832973Z" level=info msg="StartContainer for \"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" returns successfully" May 15 12:53:43.830229 systemd-networkd[1458]: calic0bbf0a6347: Link DOWN May 15 12:53:43.830239 systemd-networkd[1458]: calic0bbf0a6347: Lost carrier May 15 12:53:43.909272 containerd[1555]: time="2025-05-15T12:53:43.909162677Z" level=info msg="StartContainer for \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" returns successfully" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.828 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.829 [INFO][5260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" iface="eth0" netns="/var/run/netns/cni-1d6f220f-9786-106e-a44b-814287536f5b" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.829 [INFO][5260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" iface="eth0" netns="/var/run/netns/cni-1d6f220f-9786-106e-a44b-814287536f5b" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.836 [INFO][5260] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" after=7.225205ms iface="eth0" netns="/var/run/netns/cni-1d6f220f-9786-106e-a44b-814287536f5b" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.836 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.836 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.873 [INFO][5275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.875 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.875 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.940 [INFO][5275] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.940 [INFO][5275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.943 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:53:43.950069 containerd[1555]: 2025-05-15 12:53:43.947 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:53:43.951507 containerd[1555]: time="2025-05-15T12:53:43.951048167Z" level=info msg="TearDown network for sandbox \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" successfully" May 15 12:53:43.951507 containerd[1555]: time="2025-05-15T12:53:43.951073427Z" level=info msg="StopPodSandbox for \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" returns successfully" May 15 12:53:44.028350 kubelet[2697]: I0515 12:53:44.028261 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8" path="/var/lib/kubelet/pods/c040ea10-a6a4-4ebe-bdfa-0023f6fe49e8/volumes" May 15 12:53:44.029631 kubelet[2697]: I0515 12:53:44.028650 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76049a04-26ee-4fa9-afd5-5ad317529d27-tigera-ca-bundle\") pod \"76049a04-26ee-4fa9-afd5-5ad317529d27\" (UID: \"76049a04-26ee-4fa9-afd5-5ad317529d27\") " May 15 12:53:44.029863 kubelet[2697]: I0515 12:53:44.029786 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4wkq\" (UniqueName: \"kubernetes.io/projected/76049a04-26ee-4fa9-afd5-5ad317529d27-kube-api-access-p4wkq\") pod \"76049a04-26ee-4fa9-afd5-5ad317529d27\" (UID: \"76049a04-26ee-4fa9-afd5-5ad317529d27\") " May 15 12:53:44.036948 kubelet[2697]: I0515 12:53:44.036917 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76049a04-26ee-4fa9-afd5-5ad317529d27-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "76049a04-26ee-4fa9-afd5-5ad317529d27" (UID: "76049a04-26ee-4fa9-afd5-5ad317529d27"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 12:53:44.037624 kubelet[2697]: I0515 12:53:44.037043 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76049a04-26ee-4fa9-afd5-5ad317529d27-kube-api-access-p4wkq" (OuterVolumeSpecName: "kube-api-access-p4wkq") pod "76049a04-26ee-4fa9-afd5-5ad317529d27" (UID: "76049a04-26ee-4fa9-afd5-5ad317529d27"). InnerVolumeSpecName "kube-api-access-p4wkq". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:53:44.131608 kubelet[2697]: I0515 12:53:44.131572 2697 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76049a04-26ee-4fa9-afd5-5ad317529d27-tigera-ca-bundle\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:44.131608 kubelet[2697]: I0515 12:53:44.131602 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p4wkq\" (UniqueName: \"kubernetes.io/projected/76049a04-26ee-4fa9-afd5-5ad317529d27-kube-api-access-p4wkq\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:44.338282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a-rootfs.mount: Deactivated successfully. May 15 12:53:44.338390 systemd[1]: var-lib-kubelet-pods-76049a04\x2d26ee\x2d4fa9\x2dafd5\x2d5ad317529d27-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 15 12:53:44.338477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303-rootfs.mount: Deactivated successfully. May 15 12:53:44.338546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303-shm.mount: Deactivated successfully. May 15 12:53:44.338656 systemd[1]: run-netns-cni\x2d1d6f220f\x2d9786\x2d106e\x2da44b\x2d814287536f5b.mount: Deactivated successfully. May 15 12:53:44.338738 systemd[1]: var-lib-kubelet-pods-76049a04\x2d26ee\x2d4fa9\x2dafd5\x2d5ad317529d27-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4wkq.mount: Deactivated successfully. May 15 12:53:44.382783 kubelet[2697]: I0515 12:53:44.382743 2697 scope.go:117] "RemoveContainer" containerID="7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a" May 15 12:53:44.390266 containerd[1555]: time="2025-05-15T12:53:44.390190533Z" level=info msg="RemoveContainer for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\"" May 15 12:53:44.397785 kubelet[2697]: I0515 12:53:44.394539 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d86d7c9bb-95bdc" podStartSLOduration=50.292484795 podStartE2EDuration="1m5.394526533s" podCreationTimestamp="2025-05-15 12:52:39 +0000 UTC" firstStartedPulling="2025-05-15 12:53:28.526532485 +0000 UTC m=+62.587690903" lastFinishedPulling="2025-05-15 12:53:43.628574193 +0000 UTC m=+77.689732641" observedRunningTime="2025-05-15 12:53:44.393103397 +0000 UTC m=+78.454261815" watchObservedRunningTime="2025-05-15 12:53:44.394526533 +0000 UTC m=+78.455684951" May 15 12:53:44.403419 kubelet[2697]: E0515 12:53:44.403274 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:44.404844 kubelet[2697]: E0515 12:53:44.404812 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:44.411810 containerd[1555]: time="2025-05-15T12:53:44.411725789Z" level=info msg="RemoveContainer for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" returns successfully" May 15 12:53:44.413006 kubelet[2697]: I0515 12:53:44.412907 2697 scope.go:117] "RemoveContainer" containerID="7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a" May 15 12:53:44.413621 containerd[1555]: time="2025-05-15T12:53:44.413540519Z" level=error msg="ContainerStatus for \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\": not found" May 15 12:53:44.413812 kubelet[2697]: E0515 12:53:44.413773 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\": not found" containerID="7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a" May 15 12:53:44.413812 kubelet[2697]: I0515 12:53:44.413802 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a"} err="failed to get container status \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a\": not found" May 15 12:53:44.417860 systemd[1]: Removed slice kubepods-besteffort-pod76049a04_26ee_4fa9_afd5_5ad317529d27.slice - libcontainer container kubepods-besteffort-pod76049a04_26ee_4fa9_afd5_5ad317529d27.slice. May 15 12:53:44.421983 kubelet[2697]: I0515 12:53:44.421574 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8gsp7" podStartSLOduration=4.42154413 podStartE2EDuration="4.42154413s" podCreationTimestamp="2025-05-15 12:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:53:44.421272577 +0000 UTC m=+78.482430995" watchObservedRunningTime="2025-05-15 12:53:44.42154413 +0000 UTC m=+78.482702548" May 15 12:53:44.457990 kubelet[2697]: I0515 12:53:44.457958 2697 memory_manager.go:355] "RemoveStaleState removing state" podUID="76049a04-26ee-4fa9-afd5-5ad317529d27" containerName="calico-kube-controllers" May 15 12:53:44.468852 systemd[1]: Created slice kubepods-besteffort-pod6ee00f24_da4f_47a1_8d9d_3727b41e0fae.slice - libcontainer container kubepods-besteffort-pod6ee00f24_da4f_47a1_8d9d_3727b41e0fae.slice.
May 15 12:53:44.535677 kubelet[2697]: I0515 12:53:44.535243 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ee00f24-da4f-47a1-8d9d-3727b41e0fae-tigera-ca-bundle\") pod \"calico-kube-controllers-f6f5dc94b-x64h8\" (UID: \"6ee00f24-da4f-47a1-8d9d-3727b41e0fae\") " pod="calico-system/calico-kube-controllers-f6f5dc94b-x64h8" May 15 12:53:44.536739 kubelet[2697]: I0515 12:53:44.536178 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6g45\" (UniqueName: \"kubernetes.io/projected/6ee00f24-da4f-47a1-8d9d-3727b41e0fae-kube-api-access-k6g45\") pod \"calico-kube-controllers-f6f5dc94b-x64h8\" (UID: \"6ee00f24-da4f-47a1-8d9d-3727b41e0fae\") " pod="calico-system/calico-kube-controllers-f6f5dc94b-x64h8" May 15 12:53:44.552036 containerd[1555]: time="2025-05-15T12:53:44.551978556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"a2c208391a59680d12fc91f0e03833fa6f5e71bcc618bc7c516f2a5e6839e03b\" pid:5333 exit_status:1 exited_at:{seconds:1747313624 nanos:551663592}" May 15 12:53:44.783101 containerd[1555]: time="2025-05-15T12:53:44.782478370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6f5dc94b-x64h8,Uid:6ee00f24-da4f-47a1-8d9d-3727b41e0fae,Namespace:calico-system,Attempt:0,}" May 15 12:53:44.954908 systemd-networkd[1458]: califc5f8e5c0e1: Link UP May 15 12:53:44.956116 systemd-networkd[1458]: califc5f8e5c0e1: Gained carrier May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.852 [INFO][5350] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0 calico-kube-controllers-f6f5dc94b- calico-system 6ee00f24-da4f-47a1-8d9d-3727b41e0fae 1070 0 2025-05-15 12:53:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f6f5dc94b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-126-108 calico-kube-controllers-f6f5dc94b-x64h8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califc5f8e5c0e1 [] []}} ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.853 [INFO][5350] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.912 [INFO][5361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" HandleID="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Workload="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.923 [INFO][5361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" HandleID="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Workload="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000201290), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-126-108", "pod":"calico-kube-controllers-f6f5dc94b-x64h8", "timestamp":"2025-05-15 12:53:44.912481901 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.923 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.923 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.923 [INFO][5361] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.925 [INFO][5361] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.929 [INFO][5361] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.933 [INFO][5361] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.934 [INFO][5361] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.936 [INFO][5361] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.936 [INFO][5361] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.938 [INFO][5361] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49 May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.941 [INFO][5361] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.946 [INFO][5361] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.136/26] block=192.168.62.128/26 handle="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.947 [INFO][5361] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.136/26] handle="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" host="172-236-126-108" May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.948 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:53:44.971585 containerd[1555]: 2025-05-15 12:53:44.948 [INFO][5361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.136/26] IPv6=[] ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" HandleID="k8s-pod-network.de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Workload="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" May 15 12:53:44.972151 containerd[1555]: 2025-05-15 12:53:44.951 [INFO][5350] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0", GenerateName:"calico-kube-controllers-f6f5dc94b-", Namespace:"calico-system", SelfLink:"", UID:"6ee00f24-da4f-47a1-8d9d-3727b41e0fae", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 53, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6f5dc94b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"calico-kube-controllers-f6f5dc94b-x64h8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc5f8e5c0e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:44.972151 containerd[1555]: 2025-05-15 12:53:44.951 [INFO][5350] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.136/32] ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" May 15 12:53:44.972151 containerd[1555]: 2025-05-15 12:53:44.951 [INFO][5350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc5f8e5c0e1 ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" May 15 12:53:44.972151 containerd[1555]: 2025-05-15 12:53:44.956 [INFO][5350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" May 15 12:53:44.972151 containerd[1555]: 2025-05-15 12:53:44.957 [INFO][5350] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0", GenerateName:"calico-kube-controllers-f6f5dc94b-", Namespace:"calico-system", SelfLink:"", UID:"6ee00f24-da4f-47a1-8d9d-3727b41e0fae", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 53, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f6f5dc94b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49", Pod:"calico-kube-controllers-f6f5dc94b-x64h8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc5f8e5c0e1", MAC:"b6:6d:1d:b6:04:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:44.972151 containerd[1555]: 2025-05-15 12:53:44.967 [INFO][5350] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" Namespace="calico-system" Pod="calico-kube-controllers-f6f5dc94b-x64h8" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--f6f5dc94b--x64h8-eth0" May 15 12:53:44.999578 containerd[1555]: time="2025-05-15T12:53:44.999478862Z" level=info msg="connecting to shim de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49" address="unix:///run/containerd/s/8ca44ed22609ccdf93350225420ab051462a8dec6cb51589697666ac4f6bde8a" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:45.027822 systemd[1]: Started cri-containerd-de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49.scope - libcontainer container de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49.
May 15 12:53:45.078294 containerd[1555]: time="2025-05-15T12:53:45.078258415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f6f5dc94b-x64h8,Uid:6ee00f24-da4f-47a1-8d9d-3727b41e0fae,Namespace:calico-system,Attempt:0,} returns sandbox id \"de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49\"" May 15 12:53:45.092257 containerd[1555]: time="2025-05-15T12:53:45.092226110Z" level=info msg="CreateContainer within sandbox \"de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 12:53:45.098244 containerd[1555]: time="2025-05-15T12:53:45.098200366Z" level=info msg="Container 27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:45.102958 containerd[1555]: time="2025-05-15T12:53:45.102922769Z" level=info msg="CreateContainer within sandbox \"de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\"" May 15 12:53:45.104013 containerd[1555]: time="2025-05-15T12:53:45.103966710Z" level=info msg="StartContainer for \"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\"" May 15 12:53:45.105728 containerd[1555]: time="2025-05-15T12:53:45.105229574Z" level=info msg="connecting to shim 27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0" address="unix:///run/containerd/s/8ca44ed22609ccdf93350225420ab051462a8dec6cb51589697666ac4f6bde8a" protocol=ttrpc version=3 May 15 12:53:45.127797 systemd[1]: Started cri-containerd-27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0.scope - libcontainer container 27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0. 
May 15 12:53:45.188194 containerd[1555]: time="2025-05-15T12:53:45.188165623Z" level=info msg="StartContainer for \"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" returns successfully" May 15 12:53:45.420415 kubelet[2697]: I0515 12:53:45.420198 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:53:45.432621 kubelet[2697]: E0515 12:53:45.422253 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:45.439612 kubelet[2697]: I0515 12:53:45.439294 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f6f5dc94b-x64h8" podStartSLOduration=1.439280227 podStartE2EDuration="1.439280227s" podCreationTimestamp="2025-05-15 12:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:53:45.436463625 +0000 UTC m=+79.497622043" watchObservedRunningTime="2025-05-15 12:53:45.439280227 +0000 UTC m=+79.500438645" May 15 12:53:45.512750 containerd[1555]: time="2025-05-15T12:53:45.512707481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"555cca938a48f5c79919fb7b1476c9e8df021738de20431510d2fa447a371281\" pid:5520 exit_status:1 exited_at:{seconds:1747313625 nanos:505743263}" May 15 12:53:45.529446 containerd[1555]: time="2025-05-15T12:53:45.529397836Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1747313623 nanos:615618681}" May 15 12:53:45.720878 containerd[1555]: time="2025-05-15T12:53:45.719885556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"013cce1e3a26cb10aa27539846ac412aae75e95f6dfddcf52116b966fb088c0d\" pid:5537 exit_status:1 exited_at:{seconds:1747313625 nanos:717769943}" May 15 12:53:45.984870 systemd-networkd[1458]: califc5f8e5c0e1: Gained IPv6LL May 15 12:53:46.029725 kubelet[2697]: I0515 12:53:46.029696 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76049a04-26ee-4fa9-afd5-5ad317529d27" path="/var/lib/kubelet/pods/76049a04-26ee-4fa9-afd5-5ad317529d27/volumes" May 15 12:53:46.106854 containerd[1555]: time="2025-05-15T12:53:46.106803454Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:46.107965 containerd[1555]: time="2025-05-15T12:53:46.107936146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 12:53:46.108965 containerd[1555]: time="2025-05-15T12:53:46.108930377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.477328869s" May 15 12:53:46.108965 containerd[1555]: time="2025-05-15T12:53:46.108959677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 12:53:46.111274 containerd[1555]: time="2025-05-15T12:53:46.111209312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 12:53:46.114696 containerd[1555]: time="2025-05-15T12:53:46.114661459Z" level=info msg="CreateContainer within sandbox \"3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:53:46.130187 containerd[1555]: time="2025-05-15T12:53:46.128705690Z" level=info msg="Container fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:46.135421 containerd[1555]: time="2025-05-15T12:53:46.135370532Z" level=info msg="CreateContainer within sandbox \"3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de\"" May 15 12:53:46.137118 containerd[1555]: time="2025-05-15T12:53:46.137076171Z" level=info msg="StartContainer for \"fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de\"" May 15 12:53:46.137824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945675588.mount: Deactivated successfully. May 15 12:53:46.138582 containerd[1555]: time="2025-05-15T12:53:46.138549597Z" level=info msg="connecting to shim fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de" address="unix:///run/containerd/s/8a8311e0e294f14a66e2dcb56fb5c532b9bb94918ba2ae6ad409ed3933fe23ad" protocol=ttrpc version=3 May 15 12:53:46.188430 systemd[1]: Started cri-containerd-fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de.scope - libcontainer container fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de.
May 15 12:53:46.263863 containerd[1555]: time="2025-05-15T12:53:46.263760777Z" level=info msg="StartContainer for \"fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de\" returns successfully" May 15 12:53:46.440752 kubelet[2697]: I0515 12:53:46.440087 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86b45b489c-mh8vn" podStartSLOduration=49.650513597 podStartE2EDuration="1m6.440071789s" podCreationTimestamp="2025-05-15 12:52:40 +0000 UTC" firstStartedPulling="2025-05-15 12:53:29.321243835 +0000 UTC m=+63.382402253" lastFinishedPulling="2025-05-15 12:53:46.110802027 +0000 UTC m=+80.171960445" observedRunningTime="2025-05-15 12:53:46.439920738 +0000 UTC m=+80.501079156" watchObservedRunningTime="2025-05-15 12:53:46.440071789 +0000 UTC m=+80.501230207" May 15 12:53:46.488924 containerd[1555]: time="2025-05-15T12:53:46.488850456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"60263d522745d92cfe91d28a0a5ba7de75eb96939a188604d81c40e932d8e7e3\" pid:5741 exit_status:1 exited_at:{seconds:1747313626 nanos:488302820}" May 15 12:53:47.428143 kubelet[2697]: I0515 12:53:47.428111 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:53:49.026278 kubelet[2697]: E0515 12:53:49.026240 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:51.011847 kubelet[2697]: I0515 12:53:51.011424 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:53:51.060567 kubelet[2697]: I0515 12:53:51.060512 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:53:51.062410 containerd[1555]: time="2025-05-15T12:53:51.062205935Z" level=info msg="StopContainer for \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" with timeout 30 (s)" May 15 12:53:51.063271 containerd[1555]: time="2025-05-15T12:53:51.063205775Z" level=info msg="Stop container \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" with signal terminated" May 15 12:53:51.116864 systemd[1]: cri-containerd-2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387.scope: Deactivated successfully. May 15 12:53:51.119404 systemd[1]: Created slice kubepods-besteffort-pod7acb32cb_e495_4dce_ac2d_b0534c38a9f7.slice - libcontainer container kubepods-besteffort-pod7acb32cb_e495_4dce_ac2d_b0534c38a9f7.slice. May 15 12:53:51.124422 containerd[1555]: time="2025-05-15T12:53:51.122877518Z" level=info msg="received exit event container_id:\"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" id:\"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" pid:5239 exit_status:1 exited_at:{seconds:1747313631 nanos:122219022}" May 15 12:53:51.126086 containerd[1555]: time="2025-05-15T12:53:51.125857496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" id:\"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" pid:5239 exit_status:1 exited_at:{seconds:1747313631 nanos:122219022}" May 15 12:53:51.168081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387-rootfs.mount: Deactivated successfully. 
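[Editor's aside] The recurring kubelet "Nameserver limits exceeded" events (12:53:45 and 12:53:49 above) reflect the glibc resolver limit: at most three nameservers from resolv.conf are applied, which is why exactly three addresses survive in the applied line. A minimal Go sketch of that truncation, assuming the standard /etc/resolv.conf location (illustrative only, not kubelet's actual code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // glibc (and therefore kubelet) applies only the first three
    // nameserver entries; anything beyond that is silently omitted,
    // producing the warnings seen in the log above.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        if len(servers) > maxNameservers {
            fmt.Printf("applied: %v, omitted: %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }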
May 15 12:53:51.178164 containerd[1555]: time="2025-05-15T12:53:51.177956588Z" level=info msg="StopContainer for \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" returns successfully" May 15 12:53:51.180037 containerd[1555]: time="2025-05-15T12:53:51.179970787Z" level=info msg="StopPodSandbox for \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\"" May 15 12:53:51.180192 containerd[1555]: time="2025-05-15T12:53:51.180145899Z" level=info msg="Container to stop \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:51.190412 systemd[1]: cri-containerd-52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755.scope: Deactivated successfully. May 15 12:53:51.192047 kubelet[2697]: I0515 12:53:51.191666 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7acb32cb-e495-4dce-ac2d-b0534c38a9f7-calico-apiserver-certs\") pod \"calico-apiserver-86b45b489c-h9xn4\" (UID: \"7acb32cb-e495-4dce-ac2d-b0534c38a9f7\") " pod="calico-apiserver/calico-apiserver-86b45b489c-h9xn4" May 15 12:53:51.192047 kubelet[2697]: I0515 12:53:51.191947 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmp85\" (UniqueName: \"kubernetes.io/projected/7acb32cb-e495-4dce-ac2d-b0534c38a9f7-kube-api-access-cmp85\") pod \"calico-apiserver-86b45b489c-h9xn4\" (UID: \"7acb32cb-e495-4dce-ac2d-b0534c38a9f7\") " pod="calico-apiserver/calico-apiserver-86b45b489c-h9xn4" May 15 12:53:51.196421 containerd[1555]: time="2025-05-15T12:53:51.196366282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" id:\"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" pid:4472 exit_status:137 exited_at:{seconds:1747313631 nanos:192529106}" May 15 12:53:51.223515 containerd[1555]: time="2025-05-15T12:53:51.223484348Z" level=info msg="shim disconnected" id=52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 namespace=k8s.io May 15 12:53:51.225820 containerd[1555]: time="2025-05-15T12:53:51.225799950Z" level=warning msg="cleaning up after shim disconnected" id=52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 namespace=k8s.io May 15 12:53:51.225941 containerd[1555]: time="2025-05-15T12:53:51.225912421Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:53:51.226326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755-rootfs.mount: Deactivated successfully. May 15 12:53:51.229109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755-shm.mount: Deactivated successfully. 
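[Editor's aside] The exit_status:137 reported for the sandbox teardown above follows the usual convention that statuses above 128 encode 128 + the fatal signal number, so 137 means the task was killed by SIGKILL (9) — consistent with a forced stop after the termination grace period. A tiny Go illustration of the decoding:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // exit_status:137 from the TaskExit events above.
        status := 137
        if status > 128 {
            sig := syscall.Signal(status - 128)
            fmt.Printf("exit status %d => killed by signal %d (%s)\n",
                status, status-128, sig) // prints: ... signal 9 (killed)
        }
    }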
May 15 12:53:51.231658 containerd[1555]: time="2025-05-15T12:53:51.225704859Z" level=info msg="received exit event sandbox_id:\"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" exit_status:137 exited_at:{seconds:1747313631 nanos:192529106}" May 15 12:53:51.289879 systemd-networkd[1458]: cali7b382c0b257: Link DOWN May 15 12:53:51.289888 systemd-networkd[1458]: cali7b382c0b257: Lost carrier May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.287 [INFO][5816] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.287 [INFO][5816] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" iface="eth0" netns="/var/run/netns/cni-f093d687-78c1-752e-7ca8-ed0fc0724b16" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.288 [INFO][5816] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" iface="eth0" netns="/var/run/netns/cni-f093d687-78c1-752e-7ca8-ed0fc0724b16" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.295 [INFO][5816] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" after=7.281819ms iface="eth0" netns="/var/run/netns/cni-f093d687-78c1-752e-7ca8-ed0fc0724b16" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.296 [INFO][5816] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.296 [INFO][5816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.345 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.345 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.345 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.381 [INFO][5834] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.381 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.382 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 12:53:51.388167 containerd[1555]: 2025-05-15 12:53:51.385 [INFO][5816] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:53:51.389102 containerd[1555]: time="2025-05-15T12:53:51.388974741Z" level=info msg="TearDown network for sandbox \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" successfully" May 15 12:53:51.389102 containerd[1555]: time="2025-05-15T12:53:51.389058751Z" level=info msg="StopPodSandbox for \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" returns successfully" May 15 12:53:51.429839 containerd[1555]: time="2025-05-15T12:53:51.429776056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b45b489c-h9xn4,Uid:7acb32cb-e495-4dce-ac2d-b0534c38a9f7,Namespace:calico-apiserver,Attempt:0,}" May 15 12:53:51.442203 kubelet[2697]: I0515 12:53:51.442158 2697 scope.go:117] "RemoveContainer" containerID="2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387" May 15 12:53:51.447405 containerd[1555]: time="2025-05-15T12:53:51.447372752Z" level=info msg="RemoveContainer for \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\"" May 15 12:53:51.452166 containerd[1555]: time="2025-05-15T12:53:51.452073006Z" level=info msg="RemoveContainer for \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" returns successfully" May 15 12:53:51.452350 kubelet[2697]: I0515 12:53:51.452319 2697 scope.go:117] "RemoveContainer" containerID="2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387" May 15 12:53:51.452682 containerd[1555]: time="2025-05-15T12:53:51.452639472Z" level=error msg="ContainerStatus for \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\": not found" May 15 12:53:51.453448 kubelet[2697]: E0515 12:53:51.453410 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\": not found" containerID="2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387" May 15 12:53:51.453495 kubelet[2697]: I0515 12:53:51.453448 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387"} err="failed to get container status \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387\": not found" May 15 12:53:51.495314 kubelet[2697]: I0515 12:53:51.495271 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngsng\" (UniqueName: \"kubernetes.io/projected/72f594de-0445-4674-8b32-ccb3305262a8-kube-api-access-ngsng\") pod \"72f594de-0445-4674-8b32-ccb3305262a8\" (UID: \"72f594de-0445-4674-8b32-ccb3305262a8\") " May 15 12:53:51.495314 kubelet[2697]: I0515 12:53:51.495328 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72f594de-0445-4674-8b32-ccb3305262a8-calico-apiserver-certs\") pod \"72f594de-0445-4674-8b32-ccb3305262a8\" (UID: 
\"72f594de-0445-4674-8b32-ccb3305262a8\") " May 15 12:53:51.501201 kubelet[2697]: I0515 12:53:51.501138 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f594de-0445-4674-8b32-ccb3305262a8-kube-api-access-ngsng" (OuterVolumeSpecName: "kube-api-access-ngsng") pod "72f594de-0445-4674-8b32-ccb3305262a8" (UID: "72f594de-0445-4674-8b32-ccb3305262a8"). InnerVolumeSpecName "kube-api-access-ngsng". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:53:51.505079 kubelet[2697]: I0515 12:53:51.505043 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f594de-0445-4674-8b32-ccb3305262a8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "72f594de-0445-4674-8b32-ccb3305262a8" (UID: "72f594de-0445-4674-8b32-ccb3305262a8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 12:53:51.576495 systemd-networkd[1458]: cali0879a22d318: Link UP May 15 12:53:51.577888 systemd-networkd[1458]: cali0879a22d318: Gained carrier May 15 12:53:51.596160 kubelet[2697]: I0515 12:53:51.596096 2697 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72f594de-0445-4674-8b32-ccb3305262a8-calico-apiserver-certs\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:51.596160 kubelet[2697]: I0515 12:53:51.596136 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngsng\" (UniqueName: \"kubernetes.io/projected/72f594de-0445-4674-8b32-ccb3305262a8-kube-api-access-ngsng\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.483 [INFO][5854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0 calico-apiserver-86b45b489c- calico-apiserver 7acb32cb-e495-4dce-ac2d-b0534c38a9f7 1133 0 2025-05-15 12:53:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86b45b489c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-126-108 calico-apiserver-86b45b489c-h9xn4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0879a22d318 [] []}} ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.483 [INFO][5854] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.520 [INFO][5868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" HandleID="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Workload="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.534 
[INFO][5868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" HandleID="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Workload="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-236-126-108", "pod":"calico-apiserver-86b45b489c-h9xn4", "timestamp":"2025-05-15 12:53:51.520399232 +0000 UTC"}, Hostname:"172-236-126-108", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.534 [INFO][5868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.534 [INFO][5868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.535 [INFO][5868] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-126-108' May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.538 [INFO][5868] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.545 [INFO][5868] ipam/ipam.go 372: Looking up existing affinities for host host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.550 [INFO][5868] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.552 [INFO][5868] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.555 [INFO][5868] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.555 [INFO][5868] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.556 [INFO][5868] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5 May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.561 [INFO][5868] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.568 [INFO][5868] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.137/26] block=192.168.62.128/26 handle="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.568 [INFO][5868] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.137/26] handle="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" host="172-236-126-108" May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.568 [INFO][5868] ipam/ipam_plugin.go 374: Released host-wide 
IPAM lock. May 15 12:53:51.597129 containerd[1555]: 2025-05-15 12:53:51.568 [INFO][5868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.137/26] IPv6=[] ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" HandleID="k8s-pod-network.ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Workload="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" May 15 12:53:51.598480 containerd[1555]: 2025-05-15 12:53:51.572 [INFO][5854] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0", GenerateName:"calico-apiserver-86b45b489c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7acb32cb-e495-4dce-ac2d-b0534c38a9f7", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 53, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86b45b489c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"", Pod:"calico-apiserver-86b45b489c-h9xn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0879a22d318", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:51.598480 containerd[1555]: 2025-05-15 12:53:51.572 [INFO][5854] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.137/32] ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" May 15 12:53:51.598480 containerd[1555]: 2025-05-15 12:53:51.572 [INFO][5854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0879a22d318 ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" May 15 12:53:51.598480 containerd[1555]: 2025-05-15 12:53:51.578 [INFO][5854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" May 15 12:53:51.598480 containerd[1555]: 2025-05-15 12:53:51.579 [INFO][5854] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0", GenerateName:"calico-apiserver-86b45b489c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7acb32cb-e495-4dce-ac2d-b0534c38a9f7", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 12, 53, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86b45b489c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-126-108", ContainerID:"ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5", Pod:"calico-apiserver-86b45b489c-h9xn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0879a22d318", MAC:"9e:f1:6c:86:00:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 12:53:51.598480 containerd[1555]: 2025-05-15 12:53:51.592 [INFO][5854] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" Namespace="calico-apiserver" Pod="calico-apiserver-86b45b489c-h9xn4" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--86b45b489c--h9xn4-eth0" May 15 12:53:51.622219 containerd[1555]: time="2025-05-15T12:53:51.622164273Z" level=info msg="connecting to shim ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5" address="unix:///run/containerd/s/1827e92b357d5f3df569ee87bb9b9f9e646fa4d78958caf1ccc342d70c522319" namespace=k8s.io protocol=ttrpc version=3 May 15 12:53:51.654849 systemd[1]: Started cri-containerd-ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5.scope - libcontainer container ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5. 
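[Editor's aside] The IPAM exchange above (12:53:51.534–51.568) claims 192.168.62.137 from the node's affine block 192.168.62.128/26. A quick containment check with Go's net/netip, using the addresses copied from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Values taken from the ipam/ipam.go log lines above.
        block := netip.MustParsePrefix("192.168.62.128/26")
        addr := netip.MustParseAddr("192.168.62.137")

        // A /26 block spans 2^(32-26) = 64 addresses.
        fmt.Printf("%s holds %d addresses\n", block, 1<<(32-block.Bits()))
        fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr)) // true
    }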
May 15 12:53:51.709275 containerd[1555]: time="2025-05-15T12:53:51.709228015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86b45b489c-h9xn4,Uid:7acb32cb-e495-4dce-ac2d-b0534c38a9f7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5\"" May 15 12:53:51.718515 containerd[1555]: time="2025-05-15T12:53:51.717494543Z" level=info msg="CreateContainer within sandbox \"ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 12:53:51.724615 containerd[1555]: time="2025-05-15T12:53:51.724592650Z" level=info msg="Container df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:51.731208 containerd[1555]: time="2025-05-15T12:53:51.731090031Z" level=info msg="CreateContainer within sandbox \"ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98\"" May 15 12:53:51.733877 containerd[1555]: time="2025-05-15T12:53:51.733840607Z" level=info msg="StartContainer for \"df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98\"" May 15 12:53:51.735965 containerd[1555]: time="2025-05-15T12:53:51.735926177Z" level=info msg="connecting to shim df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98" address="unix:///run/containerd/s/1827e92b357d5f3df569ee87bb9b9f9e646fa4d78958caf1ccc342d70c522319" protocol=ttrpc version=3 May 15 12:53:51.752247 systemd[1]: Removed slice kubepods-besteffort-pod72f594de_0445_4674_8b32_ccb3305262a8.slice - libcontainer container kubepods-besteffort-pod72f594de_0445_4674_8b32_ccb3305262a8.slice. May 15 12:53:51.769465 systemd[1]: Started cri-containerd-df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98.scope - libcontainer container df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98. May 15 12:53:51.830864 containerd[1555]: time="2025-05-15T12:53:51.830707202Z" level=info msg="StartContainer for \"df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98\" returns successfully" May 15 12:53:52.029860 kubelet[2697]: I0515 12:53:52.029808 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72f594de-0445-4674-8b32-ccb3305262a8" path="/var/lib/kubelet/pods/72f594de-0445-4674-8b32-ccb3305262a8/volumes" May 15 12:53:52.175267 systemd[1]: run-netns-cni\x2df093d687\x2d78c1\x2d752e\x2d7ca8\x2ded0fc0724b16.mount: Deactivated successfully. May 15 12:53:52.175901 systemd[1]: var-lib-kubelet-pods-72f594de\x2d0445\x2d4674\x2d8b32\x2dccb3305262a8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 12:53:52.176004 systemd[1]: var-lib-kubelet-pods-72f594de\x2d0445\x2d4674\x2d8b32\x2dccb3305262a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngsng.mount: Deactivated successfully. 
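[Editor's aside] The mount unit names logged above (e.g. ...kube\x2dapi\x2daccess\x2dngsng.mount) follow systemd's path escaping: "/" separators become "-", and bytes outside roughly [a-zA-Z0-9:_.] are written as \xXX hex escapes, which is why "-" appears as \x2d and "~" as \x7e. A simplified Go sketch of that transform (systemd-escape handles more edge cases, such as a leading dot):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath is a simplified `systemd-escape --path`.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Reproduces the unit name deactivated in the log above.
        fmt.Println(escapePath(
            "/var/lib/kubelet/pods/72f594de-0445-4674-8b32-ccb3305262a8"+
                "/volumes/kubernetes.io~projected/kube-api-access-ngsng") + ".mount")
    }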
May 15 12:53:52.464673 kubelet[2697]: I0515 12:53:52.464255 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86b45b489c-h9xn4" podStartSLOduration=1.464237891 podStartE2EDuration="1.464237891s" podCreationTimestamp="2025-05-15 12:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 12:53:52.463895707 +0000 UTC m=+86.525054125" watchObservedRunningTime="2025-05-15 12:53:52.464237891 +0000 UTC m=+86.525396319" May 15 12:53:52.617635 kubelet[2697]: I0515 12:53:52.617584 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 12:53:52.620665 containerd[1555]: time="2025-05-15T12:53:52.620336816Z" level=info msg="StopContainer for \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" with timeout 30 (s)" May 15 12:53:52.622043 containerd[1555]: time="2025-05-15T12:53:52.621896661Z" level=info msg="Stop container \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" with signal terminated" May 15 12:53:52.758775 systemd[1]: cri-containerd-51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd.scope: Deactivated successfully. May 15 12:53:52.763429 containerd[1555]: time="2025-05-15T12:53:52.763369482Z" level=info msg="received exit event container_id:\"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" id:\"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" pid:4617 exit_status:1 exited_at:{seconds:1747313632 nanos:762688146}" May 15 12:53:52.764877 containerd[1555]: time="2025-05-15T12:53:52.764833295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" id:\"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" pid:4617 exit_status:1 exited_at:{seconds:1747313632 nanos:762688146}" May 15 12:53:52.793317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd-rootfs.mount: Deactivated successfully. May 15 12:53:52.800066 containerd[1555]: time="2025-05-15T12:53:52.800013389Z" level=info msg="StopContainer for \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" returns successfully" May 15 12:53:52.801362 containerd[1555]: time="2025-05-15T12:53:52.801020268Z" level=info msg="StopPodSandbox for \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\"" May 15 12:53:52.801362 containerd[1555]: time="2025-05-15T12:53:52.801088819Z" level=info msg="Container to stop \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 12:53:52.809811 systemd[1]: cri-containerd-d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598.scope: Deactivated successfully. 
May 15 12:53:52.814855 containerd[1555]: time="2025-05-15T12:53:52.814824075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" id:\"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" pid:4021 exit_status:137 exited_at:{seconds:1747313632 nanos:811778497}" May 15 12:53:52.854208 containerd[1555]: time="2025-05-15T12:53:52.854140427Z" level=info msg="shim disconnected" id=d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 namespace=k8s.io May 15 12:53:52.854208 containerd[1555]: time="2025-05-15T12:53:52.854174057Z" level=warning msg="cleaning up after shim disconnected" id=d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 namespace=k8s.io May 15 12:53:52.855327 containerd[1555]: time="2025-05-15T12:53:52.854181957Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 12:53:52.856390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598-rootfs.mount: Deactivated successfully. May 15 12:53:52.876860 containerd[1555]: time="2025-05-15T12:53:52.876815775Z" level=info msg="received exit event sandbox_id:\"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" exit_status:137 exited_at:{seconds:1747313632 nanos:811778497}" May 15 12:53:52.882011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598-shm.mount: Deactivated successfully. May 15 12:53:52.964918 systemd-networkd[1458]: cali078dfec57ab: Link DOWN May 15 12:53:52.964953 systemd-networkd[1458]: cali078dfec57ab: Lost carrier May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:52.962 [INFO][6047] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:52.963 [INFO][6047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" iface="eth0" netns="/var/run/netns/cni-13a8220b-45ed-5f20-2b05-e949bdc363d5" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:52.963 [INFO][6047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" iface="eth0" netns="/var/run/netns/cni-13a8220b-45ed-5f20-2b05-e949bdc363d5" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:52.970 [INFO][6047] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" after=7.307767ms iface="eth0" netns="/var/run/netns/cni-13a8220b-45ed-5f20-2b05-e949bdc363d5" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:52.970 [INFO][6047] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:52.970 [INFO][6047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:53.025 [INFO][6055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:53.026 [INFO][6055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:53.026 [INFO][6055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:53.080 [INFO][6055] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:53.080 [INFO][6055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:53.082 [INFO][6055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:53:53.087707 containerd[1555]: 2025-05-15 12:53:53.085 [INFO][6047] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:53:53.091926 systemd-networkd[1458]: cali0879a22d318: Gained IPv6LL May 15 12:53:53.093930 systemd[1]: run-netns-cni\x2d13a8220b\x2d45ed\x2d5f20\x2d2b05\x2de949bdc363d5.mount: Deactivated successfully. 
May 15 12:53:53.096337 containerd[1555]: time="2025-05-15T12:53:53.096284252Z" level=info msg="TearDown network for sandbox \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" successfully" May 15 12:53:53.096528 containerd[1555]: time="2025-05-15T12:53:53.096460564Z" level=info msg="StopPodSandbox for \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" returns successfully" May 15 12:53:53.195935 containerd[1555]: time="2025-05-15T12:53:53.195865094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:53.196642 containerd[1555]: time="2025-05-15T12:53:53.196583821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 15 12:53:53.197254 containerd[1555]: time="2025-05-15T12:53:53.197175126Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:53.199453 containerd[1555]: time="2025-05-15T12:53:53.199370376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 12:53:53.201766 containerd[1555]: time="2025-05-15T12:53:53.201734567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 7.090493715s" May 15 12:53:53.201840 containerd[1555]: time="2025-05-15T12:53:53.201770467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 12:53:53.204714 containerd[1555]: time="2025-05-15T12:53:53.204654393Z" level=info msg="CreateContainer within sandbox \"0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 12:53:53.207773 kubelet[2697]: I0515 12:53:53.206921 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdm7\" (UniqueName: \"kubernetes.io/projected/9698ee50-755f-43e4-a451-771820b74a00-kube-api-access-rtdm7\") pod \"9698ee50-755f-43e4-a451-771820b74a00\" (UID: \"9698ee50-755f-43e4-a451-771820b74a00\") " May 15 12:53:53.207773 kubelet[2697]: I0515 12:53:53.206973 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9698ee50-755f-43e4-a451-771820b74a00-calico-apiserver-certs\") pod \"9698ee50-755f-43e4-a451-771820b74a00\" (UID: \"9698ee50-755f-43e4-a451-771820b74a00\") " May 15 12:53:53.213859 containerd[1555]: time="2025-05-15T12:53:53.213790645Z" level=info msg="Container 4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f: CDI devices from CRI Config.CDIDevices: []" May 15 12:53:53.214062 kubelet[2697]: I0515 12:53:53.214037 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9698ee50-755f-43e4-a451-771820b74a00-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9698ee50-755f-43e4-a451-771820b74a00" (UID: "9698ee50-755f-43e4-a451-771820b74a00"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 12:53:53.217278 kubelet[2697]: I0515 12:53:53.216450 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9698ee50-755f-43e4-a451-771820b74a00-kube-api-access-rtdm7" (OuterVolumeSpecName: "kube-api-access-rtdm7") pod "9698ee50-755f-43e4-a451-771820b74a00" (UID: "9698ee50-755f-43e4-a451-771820b74a00"). InnerVolumeSpecName "kube-api-access-rtdm7". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 12:53:53.216528 systemd[1]: var-lib-kubelet-pods-9698ee50\x2d755f\x2d43e4\x2da451\x2d771820b74a00-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 15 12:53:53.216774 systemd[1]: var-lib-kubelet-pods-9698ee50\x2d755f\x2d43e4\x2da451\x2d771820b74a00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtdm7.mount: Deactivated successfully. May 15 12:53:53.235111 containerd[1555]: time="2025-05-15T12:53:53.235063766Z" level=info msg="CreateContainer within sandbox \"0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f\"" May 15 12:53:53.237092 containerd[1555]: time="2025-05-15T12:53:53.237067964Z" level=info msg="StartContainer for \"4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f\"" May 15 12:53:53.240563 containerd[1555]: time="2025-05-15T12:53:53.240352173Z" level=info msg="connecting to shim 4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f" address="unix:///run/containerd/s/4080a192d1ac6026d9f2c3ce752d67b0e9498b55f2e474ddcb835928b6e83335" protocol=ttrpc version=3 May 15 12:53:53.275894 systemd[1]: Started cri-containerd-4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f.scope - libcontainer container 4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f. 
May 15 12:53:53.308774 kubelet[2697]: I0515 12:53:53.308594 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtdm7\" (UniqueName: \"kubernetes.io/projected/9698ee50-755f-43e4-a451-771820b74a00-kube-api-access-rtdm7\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:53.309733 kubelet[2697]: I0515 12:53:53.309442 2697 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9698ee50-755f-43e4-a451-771820b74a00-calico-apiserver-certs\") on node \"172-236-126-108\" DevicePath \"\"" May 15 12:53:53.327259 containerd[1555]: time="2025-05-15T12:53:53.327190521Z" level=info msg="StartContainer for \"4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f\" returns successfully" May 15 12:53:53.465352 kubelet[2697]: I0515 12:53:53.465202 2697 scope.go:117] "RemoveContainer" containerID="51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd" May 15 12:53:53.468846 containerd[1555]: time="2025-05-15T12:53:53.468793660Z" level=info msg="RemoveContainer for \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\"" May 15 12:53:53.481720 containerd[1555]: time="2025-05-15T12:53:53.481627115Z" level=info msg="RemoveContainer for \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" returns successfully" May 15 12:53:53.483499 kubelet[2697]: I0515 12:53:53.483429 2697 scope.go:117] "RemoveContainer" containerID="51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd" May 15 12:53:53.488349 systemd[1]: Removed slice kubepods-besteffort-pod9698ee50_755f_43e4_a451_771820b74a00.slice - libcontainer container kubepods-besteffort-pod9698ee50_755f_43e4_a451_771820b74a00.slice. May 15 12:53:53.491573 containerd[1555]: time="2025-05-15T12:53:53.491448983Z" level=error msg="ContainerStatus for \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\": not found" May 15 12:53:53.492797 kubelet[2697]: E0515 12:53:53.492547 2697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\": not found" containerID="51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd" May 15 12:53:53.492797 kubelet[2697]: I0515 12:53:53.492629 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd"} err="failed to get container status \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd\": not found" May 15 12:53:53.504644 kubelet[2697]: I0515 12:53:53.504589 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nq42m" podStartSLOduration=48.637319948 podStartE2EDuration="1m14.50446799s" podCreationTimestamp="2025-05-15 12:52:39 +0000 UTC" firstStartedPulling="2025-05-15 12:53:27.335534843 +0000 UTC m=+61.396693261" lastFinishedPulling="2025-05-15 12:53:53.202682885 +0000 UTC m=+87.263841303" observedRunningTime="2025-05-15 12:53:53.476801182 +0000 UTC m=+87.537959600" watchObservedRunningTime="2025-05-15 12:53:53.50446799 +0000 
UTC m=+87.565626408" May 15 12:53:54.026410 kubelet[2697]: E0515 12:53:54.026366 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:53:54.033154 kubelet[2697]: I0515 12:53:54.032164 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9698ee50-755f-43e4-a451-771820b74a00" path="/var/lib/kubelet/pods/9698ee50-755f-43e4-a451-771820b74a00/volumes" May 15 12:53:54.182310 kubelet[2697]: I0515 12:53:54.182265 2697 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 12:53:54.182587 kubelet[2697]: I0515 12:53:54.182524 2697 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 12:53:56.027389 kubelet[2697]: E0515 12:53:56.026881 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:54:15.502197 containerd[1555]: time="2025-05-15T12:54:15.502144390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"75e0c7f6611c93e130d0f307b997e1ad2bed46602619a3344ae2429a876cafb8\" pid:6137 exited_at:{seconds:1747313655 nanos:500860163}" May 15 12:54:15.505676 kubelet[2697]: E0515 12:54:15.505223 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:54:16.473197 containerd[1555]: time="2025-05-15T12:54:16.473148548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"955aa1755fb37a9bf98a841c9487dd6ed73a3e0b569420605b2fbf09dd60bb1c\" pid:6161 exited_at:{seconds:1747313656 nanos:472975167}" May 15 12:54:22.524744 systemd[1]: Started sshd@7-172.236.126.108:22-139.178.89.65:54774.service - OpenSSH per-connection server daemon (139.178.89.65:54774). May 15 12:54:22.891294 sshd[6175]: Accepted publickey for core from 139.178.89.65 port 54774 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:54:22.893537 sshd-session[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:54:22.903702 systemd-logind[1531]: New session 8 of user core. May 15 12:54:22.910831 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 12:54:23.270731 sshd[6177]: Connection closed by 139.178.89.65 port 54774 May 15 12:54:23.272883 sshd-session[6175]: pam_unix(sshd:session): session closed for user core May 15 12:54:23.282735 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit. May 15 12:54:23.283369 systemd[1]: sshd@7-172.236.126.108:22-139.178.89.65:54774.service: Deactivated successfully. May 15 12:54:23.291497 systemd[1]: session-8.scope: Deactivated successfully. May 15 12:54:23.301289 systemd-logind[1531]: Removed session 8. 
May 15 12:54:26.036759 containerd[1555]: time="2025-05-15T12:54:26.036709018Z" level=info msg="StopPodSandbox for \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\"" May 15 12:54:26.037171 containerd[1555]: time="2025-05-15T12:54:26.036864748Z" level=info msg="TearDown network for sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" successfully" May 15 12:54:26.037171 containerd[1555]: time="2025-05-15T12:54:26.036877259Z" level=info msg="StopPodSandbox for \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" returns successfully" May 15 12:54:26.037530 containerd[1555]: time="2025-05-15T12:54:26.037506701Z" level=info msg="RemovePodSandbox for \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\"" May 15 12:54:26.037615 containerd[1555]: time="2025-05-15T12:54:26.037533391Z" level=info msg="Forcibly stopping sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\"" May 15 12:54:26.037649 containerd[1555]: time="2025-05-15T12:54:26.037624762Z" level=info msg="TearDown network for sandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" successfully" May 15 12:54:26.039897 containerd[1555]: time="2025-05-15T12:54:26.039874551Z" level=info msg="Ensure that sandbox 49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea in task-service has been cleanup successfully" May 15 12:54:26.043050 containerd[1555]: time="2025-05-15T12:54:26.042977544Z" level=info msg="RemovePodSandbox \"49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea\" returns successfully" May 15 12:54:26.043433 containerd[1555]: time="2025-05-15T12:54:26.043412576Z" level=info msg="StopPodSandbox for \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\"" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.116 [WARNING][6205] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.116 [INFO][6205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.116 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" iface="eth0" netns="" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.116 [INFO][6205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.116 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.178 [INFO][6212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.179 [INFO][6212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.179 [INFO][6212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.195 [WARNING][6212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.195 [INFO][6212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.197 [INFO][6212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:54:26.201529 containerd[1555]: 2025-05-15 12:54:26.199 [INFO][6205] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.202416 containerd[1555]: time="2025-05-15T12:54:26.201592050Z" level=info msg="TearDown network for sandbox \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" successfully" May 15 12:54:26.202416 containerd[1555]: time="2025-05-15T12:54:26.201615080Z" level=info msg="StopPodSandbox for \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" returns successfully" May 15 12:54:26.202416 containerd[1555]: time="2025-05-15T12:54:26.201977612Z" level=info msg="RemovePodSandbox for \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\"" May 15 12:54:26.202416 containerd[1555]: time="2025-05-15T12:54:26.202000932Z" level=info msg="Forcibly stopping sandbox \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\"" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.249 [WARNING][6230] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" WorkloadEndpoint="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.249 [INFO][6230] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.249 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" iface="eth0" netns="" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.250 [INFO][6230] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.250 [INFO][6230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.279 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.279 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.279 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.290 [WARNING][6238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.290 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" HandleID="k8s-pod-network.19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" Workload="172--236--126--108-k8s-calico--kube--controllers--699d85858d--pssr6-eth0" May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.294 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:54:26.300046 containerd[1555]: 2025-05-15 12:54:26.297 [INFO][6230] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303" May 15 12:54:26.300046 containerd[1555]: time="2025-05-15T12:54:26.299976903Z" level=info msg="TearDown network for sandbox \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" successfully" May 15 12:54:26.303727 containerd[1555]: time="2025-05-15T12:54:26.303649858Z" level=info msg="Ensure that sandbox 19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 in task-service has been cleanup successfully" May 15 12:54:26.313083 containerd[1555]: time="2025-05-15T12:54:26.312962668Z" level=info msg="RemovePodSandbox \"19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303\" returns successfully" May 15 12:54:26.314074 containerd[1555]: time="2025-05-15T12:54:26.314031942Z" level=info msg="StopPodSandbox for \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\"" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.358 [WARNING][6256] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.358 [INFO][6256] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.358 [INFO][6256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" iface="eth0" netns="" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.358 [INFO][6256] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.358 [INFO][6256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.386 [INFO][6263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.386 [INFO][6263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.386 [INFO][6263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.393 [WARNING][6263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.393 [INFO][6263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.395 [INFO][6263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:54:26.398916 containerd[1555]: 2025-05-15 12:54:26.397 [INFO][6256] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.399804 containerd[1555]: time="2025-05-15T12:54:26.399037969Z" level=info msg="TearDown network for sandbox \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" successfully" May 15 12:54:26.399804 containerd[1555]: time="2025-05-15T12:54:26.399167849Z" level=info msg="StopPodSandbox for \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" returns successfully" May 15 12:54:26.401075 containerd[1555]: time="2025-05-15T12:54:26.400767526Z" level=info msg="RemovePodSandbox for \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\"" May 15 12:54:26.401075 containerd[1555]: time="2025-05-15T12:54:26.400797796Z" level=info msg="Forcibly stopping sandbox \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\"" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.477 [WARNING][6284] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.478 [INFO][6284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.478 [INFO][6284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" iface="eth0" netns="" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.478 [INFO][6284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.478 [INFO][6284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.530 [INFO][6294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.531 [INFO][6294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.532 [INFO][6294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.544 [WARNING][6294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.544 [INFO][6294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" HandleID="k8s-pod-network.52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--95bdc-eth0" May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.546 [INFO][6294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:54:26.552209 containerd[1555]: 2025-05-15 12:54:26.548 [INFO][6284] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755" May 15 12:54:26.553828 containerd[1555]: time="2025-05-15T12:54:26.552291532Z" level=info msg="TearDown network for sandbox \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" successfully" May 15 12:54:26.556925 containerd[1555]: time="2025-05-15T12:54:26.556897052Z" level=info msg="Ensure that sandbox 52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 in task-service has been cleanup successfully" May 15 12:54:26.560626 containerd[1555]: time="2025-05-15T12:54:26.560342586Z" level=info msg="RemovePodSandbox \"52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755\" returns successfully" May 15 12:54:26.561049 containerd[1555]: time="2025-05-15T12:54:26.561015899Z" level=info msg="StopPodSandbox for \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\"" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.622 [WARNING][6314] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.623 [INFO][6314] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.623 [INFO][6314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" iface="eth0" netns="" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.623 [INFO][6314] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.623 [INFO][6314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.650 [INFO][6321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.650 [INFO][6321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.650 [INFO][6321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.657 [WARNING][6321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.657 [INFO][6321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.659 [INFO][6321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:54:26.663334 containerd[1555]: 2025-05-15 12:54:26.661 [INFO][6314] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.663979 containerd[1555]: time="2025-05-15T12:54:26.663835861Z" level=info msg="TearDown network for sandbox \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" successfully" May 15 12:54:26.663979 containerd[1555]: time="2025-05-15T12:54:26.663879301Z" level=info msg="StopPodSandbox for \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" returns successfully" May 15 12:54:26.664382 containerd[1555]: time="2025-05-15T12:54:26.664360553Z" level=info msg="RemovePodSandbox for \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\"" May 15 12:54:26.664514 containerd[1555]: time="2025-05-15T12:54:26.664476723Z" level=info msg="Forcibly stopping sandbox \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\"" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.702 [WARNING][6339] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" WorkloadEndpoint="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.702 [INFO][6339] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.702 [INFO][6339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" iface="eth0" netns="" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.702 [INFO][6339] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.702 [INFO][6339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.728 [INFO][6346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.732 [INFO][6346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.733 [INFO][6346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.740 [WARNING][6346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.740 [INFO][6346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" HandleID="k8s-pod-network.d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" Workload="172--236--126--108-k8s-calico--apiserver--5d86d7c9bb--64dfc-eth0" May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.742 [INFO][6346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 12:54:26.745941 containerd[1555]: 2025-05-15 12:54:26.744 [INFO][6339] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598" May 15 12:54:26.746400 containerd[1555]: time="2025-05-15T12:54:26.745974365Z" level=info msg="TearDown network for sandbox \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" successfully" May 15 12:54:26.748388 containerd[1555]: time="2025-05-15T12:54:26.748356105Z" level=info msg="Ensure that sandbox d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 in task-service has been cleanup successfully" May 15 12:54:26.750296 containerd[1555]: time="2025-05-15T12:54:26.750269784Z" level=info msg="RemovePodSandbox \"d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598\" returns successfully" May 15 12:54:26.750941 containerd[1555]: time="2025-05-15T12:54:26.750897866Z" level=info msg="StopPodSandbox for \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\"" May 15 12:54:26.751472 containerd[1555]: time="2025-05-15T12:54:26.751434278Z" level=info msg="TearDown network for sandbox \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" successfully" May 15 12:54:26.751472 containerd[1555]: time="2025-05-15T12:54:26.751464479Z" level=info msg="StopPodSandbox for \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" returns successfully" May 15 12:54:26.751917 containerd[1555]: time="2025-05-15T12:54:26.751878290Z" level=info msg="RemovePodSandbox for \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\"" May 15 12:54:26.751917 containerd[1555]: time="2025-05-15T12:54:26.751905540Z" level=info msg="Forcibly stopping sandbox \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\"" May 15 12:54:26.752002 containerd[1555]: time="2025-05-15T12:54:26.751989961Z" level=info msg="TearDown network for sandbox \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" successfully" May 15 12:54:26.753803 containerd[1555]: time="2025-05-15T12:54:26.753780728Z" level=info msg="Ensure that sandbox 1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6 in task-service has been cleanup successfully" May 15 12:54:26.755686 containerd[1555]: time="2025-05-15T12:54:26.755660836Z" level=info msg="RemovePodSandbox \"1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6\" returns successfully" May 15 12:54:28.336863 systemd[1]: Started sshd@8-172.236.126.108:22-139.178.89.65:52198.service - OpenSSH per-connection server daemon (139.178.89.65:52198). 
May 15 12:54:28.689935 sshd[6355]: Accepted publickey for core from 139.178.89.65 port 52198 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:28.692254 sshd-session[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:28.699603 systemd-logind[1531]: New session 9 of user core.
May 15 12:54:28.706736 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 12:54:29.011712 sshd[6357]: Connection closed by 139.178.89.65 port 52198
May 15 12:54:29.013599 sshd-session[6355]: pam_unix(sshd:session): session closed for user core
May 15 12:54:29.018408 systemd[1]: sshd@8-172.236.126.108:22-139.178.89.65:52198.service: Deactivated successfully.
May 15 12:54:29.022262 systemd[1]: session-9.scope: Deactivated successfully.
May 15 12:54:29.023746 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit.
May 15 12:54:29.024947 systemd-logind[1531]: Removed session 9.
May 15 12:54:34.072691 systemd[1]: Started sshd@9-172.236.126.108:22-139.178.89.65:52212.service - OpenSSH per-connection server daemon (139.178.89.65:52212).
May 15 12:54:34.423067 sshd[6374]: Accepted publickey for core from 139.178.89.65 port 52212 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:34.425666 sshd-session[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:34.435694 systemd-logind[1531]: New session 10 of user core.
May 15 12:54:34.445495 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 12:54:34.724277 sshd[6376]: Connection closed by 139.178.89.65 port 52212
May 15 12:54:34.724995 sshd-session[6374]: pam_unix(sshd:session): session closed for user core
May 15 12:54:34.730038 systemd[1]: sshd@9-172.236.126.108:22-139.178.89.65:52212.service: Deactivated successfully.
May 15 12:54:34.732809 systemd[1]: session-10.scope: Deactivated successfully.
May 15 12:54:34.733822 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit.
May 15 12:54:34.736318 systemd-logind[1531]: Removed session 10.
May 15 12:54:34.796502 systemd[1]: Started sshd@10-172.236.126.108:22-139.178.89.65:52228.service - OpenSSH per-connection server daemon (139.178.89.65:52228).
May 15 12:54:35.146227 sshd[6389]: Accepted publickey for core from 139.178.89.65 port 52228 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:35.147861 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:35.154100 systemd-logind[1531]: New session 11 of user core.
May 15 12:54:35.160974 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 12:54:35.485687 sshd[6391]: Connection closed by 139.178.89.65 port 52228
May 15 12:54:35.486404 sshd-session[6389]: pam_unix(sshd:session): session closed for user core
May 15 12:54:35.490887 systemd[1]: sshd@10-172.236.126.108:22-139.178.89.65:52228.service: Deactivated successfully.
May 15 12:54:35.493679 systemd[1]: session-11.scope: Deactivated successfully.
May 15 12:54:35.495340 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit.
May 15 12:54:35.497376 systemd-logind[1531]: Removed session 11.
May 15 12:54:35.546583 systemd[1]: Started sshd@11-172.236.126.108:22-139.178.89.65:52242.service - OpenSSH per-connection server daemon (139.178.89.65:52242).
May 15 12:54:35.882830 sshd[6401]: Accepted publickey for core from 139.178.89.65 port 52242 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:35.884534 sshd-session[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:35.890618 systemd-logind[1531]: New session 12 of user core.
May 15 12:54:35.895731 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 12:54:36.175237 sshd[6403]: Connection closed by 139.178.89.65 port 52242
May 15 12:54:36.175777 sshd-session[6401]: pam_unix(sshd:session): session closed for user core
May 15 12:54:36.180744 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit.
May 15 12:54:36.181481 systemd[1]: sshd@11-172.236.126.108:22-139.178.89.65:52242.service: Deactivated successfully.
May 15 12:54:36.183651 systemd[1]: session-12.scope: Deactivated successfully.
May 15 12:54:36.185422 systemd-logind[1531]: Removed session 12.
May 15 12:54:37.025696 kubelet[2697]: E0515 12:54:37.025661 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:54:38.026843 kubelet[2697]: E0515 12:54:38.026466 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:54:41.240632 systemd[1]: Started sshd@12-172.236.126.108:22-139.178.89.65:60594.service - OpenSSH per-connection server daemon (139.178.89.65:60594).
May 15 12:54:41.578400 sshd[6419]: Accepted publickey for core from 139.178.89.65 port 60594 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:41.579944 sshd-session[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:41.584633 systemd-logind[1531]: New session 13 of user core.
May 15 12:54:41.591692 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 12:54:41.882351 sshd[6421]: Connection closed by 139.178.89.65 port 60594
May 15 12:54:41.882498 sshd-session[6419]: pam_unix(sshd:session): session closed for user core
May 15 12:54:41.888041 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit.
May 15 12:54:41.888757 systemd[1]: sshd@12-172.236.126.108:22-139.178.89.65:60594.service: Deactivated successfully.
May 15 12:54:41.891169 systemd[1]: session-13.scope: Deactivated successfully.
May 15 12:54:41.893449 systemd-logind[1531]: Removed session 13.
May 15 12:54:44.828337 containerd[1555]: time="2025-05-15T12:54:44.828296671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"0208f54dbd16429236a8893ce8772997f812b57e6cd142d9ab199f774daa2346\" pid:6446 exited_at:{seconds:1747313684 nanos:828025570}"
May 15 12:54:45.491966 containerd[1555]: time="2025-05-15T12:54:45.491923021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"f58f81f4374271033f768b1dc92180b7f67e26e86b86529629f9a7a290a72bdf\" pid:6468 exited_at:{seconds:1747313685 nanos:491214458}"
May 15 12:54:46.467897 containerd[1555]: time="2025-05-15T12:54:46.467850039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"d3f9c9356369235f7ef3cbff33347032213516bc13b29baa09c4359ac60eacfa\" pid:6492 exited_at:{seconds:1747313686 nanos:467427258}"
May 15 12:54:46.954767 systemd[1]: Started sshd@13-172.236.126.108:22-139.178.89.65:46780.service - OpenSSH per-connection server daemon (139.178.89.65:46780).
May 15 12:54:47.306410 sshd[6502]: Accepted publickey for core from 139.178.89.65 port 46780 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:47.308238 sshd-session[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:47.312814 systemd-logind[1531]: New session 14 of user core.
May 15 12:54:47.317680 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 12:54:47.620122 sshd[6504]: Connection closed by 139.178.89.65 port 46780
May 15 12:54:47.620778 sshd-session[6502]: pam_unix(sshd:session): session closed for user core
May 15 12:54:47.625674 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit.
May 15 12:54:47.625815 systemd[1]: sshd@13-172.236.126.108:22-139.178.89.65:46780.service: Deactivated successfully.
May 15 12:54:47.628663 systemd[1]: session-14.scope: Deactivated successfully.
May 15 12:54:47.630747 systemd-logind[1531]: Removed session 14.
May 15 12:54:52.679692 systemd[1]: Started sshd@14-172.236.126.108:22-139.178.89.65:46784.service - OpenSSH per-connection server daemon (139.178.89.65:46784).
May 15 12:54:53.023710 sshd[6517]: Accepted publickey for core from 139.178.89.65 port 46784 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:53.025005 sshd-session[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:53.030837 systemd-logind[1531]: New session 15 of user core.
May 15 12:54:53.036707 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 12:54:53.329192 sshd[6519]: Connection closed by 139.178.89.65 port 46784
May 15 12:54:53.329887 sshd-session[6517]: pam_unix(sshd:session): session closed for user core
May 15 12:54:53.334277 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit.
May 15 12:54:53.335644 systemd[1]: sshd@14-172.236.126.108:22-139.178.89.65:46784.service: Deactivated successfully.
May 15 12:54:53.337794 systemd[1]: session-15.scope: Deactivated successfully.
May 15 12:54:53.339663 systemd-logind[1531]: Removed session 15.
May 15 12:54:58.395209 systemd[1]: Started sshd@15-172.236.126.108:22-139.178.89.65:60708.service - OpenSSH per-connection server daemon (139.178.89.65:60708).
May 15 12:54:58.735921 sshd[6531]: Accepted publickey for core from 139.178.89.65 port 60708 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:54:58.737395 sshd-session[6531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:54:58.742950 systemd-logind[1531]: New session 16 of user core.
May 15 12:54:58.745771 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 12:54:59.050965 sshd[6533]: Connection closed by 139.178.89.65 port 60708
May 15 12:54:59.051874 sshd-session[6531]: pam_unix(sshd:session): session closed for user core
May 15 12:54:59.056166 systemd[1]: sshd@15-172.236.126.108:22-139.178.89.65:60708.service: Deactivated successfully.
May 15 12:54:59.058750 systemd[1]: session-16.scope: Deactivated successfully.
May 15 12:54:59.059714 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit.
May 15 12:54:59.062060 systemd-logind[1531]: Removed session 16.
May 15 12:55:00.026235 kubelet[2697]: E0515 12:55:00.026163 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:01.026275 kubelet[2697]: E0515 12:55:01.026207 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:02.025914 kubelet[2697]: E0515 12:55:02.025795 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:04.110671 systemd[1]: Started sshd@16-172.236.126.108:22-139.178.89.65:60710.service - OpenSSH per-connection server daemon (139.178.89.65:60710).
May 15 12:55:04.451210 sshd[6546]: Accepted publickey for core from 139.178.89.65 port 60710 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:04.452708 sshd-session[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:04.457605 systemd-logind[1531]: New session 17 of user core.
May 15 12:55:04.466837 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 12:55:04.756890 sshd[6548]: Connection closed by 139.178.89.65 port 60710
May 15 12:55:04.757772 sshd-session[6546]: pam_unix(sshd:session): session closed for user core
May 15 12:55:04.762282 systemd[1]: sshd@16-172.236.126.108:22-139.178.89.65:60710.service: Deactivated successfully.
May 15 12:55:04.764626 systemd[1]: session-17.scope: Deactivated successfully.
May 15 12:55:04.765677 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit.
May 15 12:55:04.767825 systemd-logind[1531]: Removed session 17.
May 15 12:55:09.026719 kubelet[2697]: E0515 12:55:09.026678 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:09.822524 systemd[1]: Started sshd@17-172.236.126.108:22-139.178.89.65:41124.service - OpenSSH per-connection server daemon (139.178.89.65:41124).
May 15 12:55:10.157027 sshd[6566]: Accepted publickey for core from 139.178.89.65 port 41124 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:10.158543 sshd-session[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:10.164690 systemd-logind[1531]: New session 18 of user core.
May 15 12:55:10.176709 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 12:55:10.459906 sshd[6568]: Connection closed by 139.178.89.65 port 41124
May 15 12:55:10.460661 sshd-session[6566]: pam_unix(sshd:session): session closed for user core
May 15 12:55:10.465448 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit.
May 15 12:55:10.465907 systemd[1]: sshd@17-172.236.126.108:22-139.178.89.65:41124.service: Deactivated successfully.
May 15 12:55:10.468371 systemd[1]: session-18.scope: Deactivated successfully.
May 15 12:55:10.471054 systemd-logind[1531]: Removed session 18.
May 15 12:55:15.493226 containerd[1555]: time="2025-05-15T12:55:15.493101561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"2804513c81ef940e768f6b9a14c39e5f7d4e494056eb8f3149410a7e9c5fba39\" pid:6594 exited_at:{seconds:1747313715 nanos:492801650}"
May 15 12:55:15.519755 systemd[1]: Started sshd@18-172.236.126.108:22-139.178.89.65:41130.service - OpenSSH per-connection server daemon (139.178.89.65:41130).
May 15 12:55:15.858550 sshd[6607]: Accepted publickey for core from 139.178.89.65 port 41130 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:15.860230 sshd-session[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:15.869847 systemd-logind[1531]: New session 19 of user core.
May 15 12:55:15.880720 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 12:55:16.163484 sshd[6609]: Connection closed by 139.178.89.65 port 41130
May 15 12:55:16.164213 sshd-session[6607]: pam_unix(sshd:session): session closed for user core
May 15 12:55:16.170163 systemd[1]: sshd@18-172.236.126.108:22-139.178.89.65:41130.service: Deactivated successfully.
May 15 12:55:16.173878 systemd[1]: session-19.scope: Deactivated successfully.
May 15 12:55:16.174808 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit.
May 15 12:55:16.177140 systemd-logind[1531]: Removed session 19.
May 15 12:55:16.470061 containerd[1555]: time="2025-05-15T12:55:16.469886550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"d29e3352876c44f45358792c6d50a6279654cfc6bc798662860a27c43d0af881\" pid:6631 exited_at:{seconds:1747313716 nanos:469671359}"
May 15 12:55:17.025966 kubelet[2697]: E0515 12:55:17.025886 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:21.026313 kubelet[2697]: E0515 12:55:21.026278 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:21.223938 systemd[1]: Started sshd@19-172.236.126.108:22-139.178.89.65:57964.service - OpenSSH per-connection server daemon (139.178.89.65:57964).
May 15 12:55:21.559692 sshd[6646]: Accepted publickey for core from 139.178.89.65 port 57964 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:21.561387 sshd-session[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:21.566615 systemd-logind[1531]: New session 20 of user core.
May 15 12:55:21.570751 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 12:55:21.852975 sshd[6648]: Connection closed by 139.178.89.65 port 57964
May 15 12:55:21.853530 sshd-session[6646]: pam_unix(sshd:session): session closed for user core
May 15 12:55:21.858000 systemd[1]: sshd@19-172.236.126.108:22-139.178.89.65:57964.service: Deactivated successfully.
May 15 12:55:21.860282 systemd[1]: session-20.scope: Deactivated successfully.
May 15 12:55:21.861493 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit.
May 15 12:55:21.863342 systemd-logind[1531]: Removed session 20.
May 15 12:55:26.917872 systemd[1]: Started sshd@20-172.236.126.108:22-139.178.89.65:48906.service - OpenSSH per-connection server daemon (139.178.89.65:48906).
May 15 12:55:27.260788 sshd[6673]: Accepted publickey for core from 139.178.89.65 port 48906 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:27.263330 sshd-session[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:27.274014 systemd-logind[1531]: New session 21 of user core.
May 15 12:55:27.279749 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 12:55:27.599583 sshd[6675]: Connection closed by 139.178.89.65 port 48906
May 15 12:55:27.600177 sshd-session[6673]: pam_unix(sshd:session): session closed for user core
May 15 12:55:27.606303 systemd[1]: sshd@20-172.236.126.108:22-139.178.89.65:48906.service: Deactivated successfully.
May 15 12:55:27.608932 systemd[1]: session-21.scope: Deactivated successfully.
May 15 12:55:27.610734 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit.
May 15 12:55:27.613065 systemd-logind[1531]: Removed session 21.
May 15 12:55:32.662520 systemd[1]: Started sshd@21-172.236.126.108:22-139.178.89.65:48916.service - OpenSSH per-connection server daemon (139.178.89.65:48916).
May 15 12:55:33.008143 sshd[6689]: Accepted publickey for core from 139.178.89.65 port 48916 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:33.009788 sshd-session[6689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:33.015019 systemd-logind[1531]: New session 22 of user core.
May 15 12:55:33.025759 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 12:55:33.318641 sshd[6691]: Connection closed by 139.178.89.65 port 48916
May 15 12:55:33.319754 sshd-session[6689]: pam_unix(sshd:session): session closed for user core
May 15 12:55:33.325141 systemd[1]: sshd@21-172.236.126.108:22-139.178.89.65:48916.service: Deactivated successfully.
May 15 12:55:33.327780 systemd[1]: session-22.scope: Deactivated successfully.
May 15 12:55:33.328777 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit.
May 15 12:55:33.331212 systemd-logind[1531]: Removed session 22.
May 15 12:55:38.386020 systemd[1]: Started sshd@22-172.236.126.108:22-139.178.89.65:46160.service - OpenSSH per-connection server daemon (139.178.89.65:46160).
May 15 12:55:38.737101 sshd[6703]: Accepted publickey for core from 139.178.89.65 port 46160 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:38.738707 sshd-session[6703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:38.743440 systemd-logind[1531]: New session 23 of user core.
May 15 12:55:38.749706 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 12:55:39.047954 sshd[6705]: Connection closed by 139.178.89.65 port 46160
May 15 12:55:39.048717 sshd-session[6703]: pam_unix(sshd:session): session closed for user core
May 15 12:55:39.053788 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit.
May 15 12:55:39.055020 systemd[1]: sshd@22-172.236.126.108:22-139.178.89.65:46160.service: Deactivated successfully.
May 15 12:55:39.057850 systemd[1]: session-23.scope: Deactivated successfully.
May 15 12:55:39.060972 systemd-logind[1531]: Removed session 23.
May 15 12:55:40.026081 kubelet[2697]: E0515 12:55:40.025992 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:44.114783 systemd[1]: Started sshd@23-172.236.126.108:22-139.178.89.65:46170.service - OpenSSH per-connection server daemon (139.178.89.65:46170).
May 15 12:55:44.469240 sshd[6716]: Accepted publickey for core from 139.178.89.65 port 46170 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:44.470927 sshd-session[6716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:44.476465 systemd-logind[1531]: New session 24 of user core.
May 15 12:55:44.482712 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 12:55:44.784631 sshd[6718]: Connection closed by 139.178.89.65 port 46170
May 15 12:55:44.785263 sshd-session[6716]: pam_unix(sshd:session): session closed for user core
May 15 12:55:44.791615 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit.
May 15 12:55:44.792379 systemd[1]: sshd@23-172.236.126.108:22-139.178.89.65:46170.service: Deactivated successfully.
May 15 12:55:44.796010 systemd[1]: session-24.scope: Deactivated successfully.
May 15 12:55:44.799731 systemd-logind[1531]: Removed session 24.
May 15 12:55:44.833899 containerd[1555]: time="2025-05-15T12:55:44.833829513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"c5aa5e8224ce9749f4ad4d40a80f3955a7e51fea2d0fa429dc5f651b422ccf1a\" pid:6741 exited_at:{seconds:1747313744 nanos:833500428}"
May 15 12:55:45.026412 kubelet[2697]: E0515 12:55:45.026370 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:55:45.494203 containerd[1555]: time="2025-05-15T12:55:45.494120804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"db2fa405f17bb6c8b5a7b96caad29dcc614c0eda68eadd63672cded7be76f8a6\" pid:6764 exited_at:{seconds:1747313745 nanos:493720769}"
May 15 12:55:46.466488 containerd[1555]: time="2025-05-15T12:55:46.466419944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"f93034051312884a1d037568436f351930113d3d66dc90a7ca9ac1f9143295b3\" pid:6791 exited_at:{seconds:1747313746 nanos:466212292}"
May 15 12:55:49.847736 systemd[1]: Started sshd@24-172.236.126.108:22-139.178.89.65:55268.service - OpenSSH per-connection server daemon (139.178.89.65:55268).
May 15 12:55:50.199911 sshd[6801]: Accepted publickey for core from 139.178.89.65 port 55268 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:50.202026 sshd-session[6801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:50.208541 systemd-logind[1531]: New session 25 of user core.
May 15 12:55:50.217700 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 12:55:50.506256 sshd[6803]: Connection closed by 139.178.89.65 port 55268
May 15 12:55:50.507550 sshd-session[6801]: pam_unix(sshd:session): session closed for user core
May 15 12:55:50.512009 systemd-logind[1531]: Session 25 logged out. Waiting for processes to exit.
May 15 12:55:50.512914 systemd[1]: sshd@24-172.236.126.108:22-139.178.89.65:55268.service: Deactivated successfully.
May 15 12:55:50.515027 systemd[1]: session-25.scope: Deactivated successfully.
May 15 12:55:50.516453 systemd-logind[1531]: Removed session 25.
May 15 12:55:55.569051 systemd[1]: Started sshd@25-172.236.126.108:22-139.178.89.65:55276.service - OpenSSH per-connection server daemon (139.178.89.65:55276).
May 15 12:55:55.907480 sshd[6815]: Accepted publickey for core from 139.178.89.65 port 55276 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:55:55.910208 sshd-session[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:55:55.920851 systemd-logind[1531]: New session 26 of user core.
May 15 12:55:55.927773 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 12:55:56.229080 sshd[6817]: Connection closed by 139.178.89.65 port 55276
May 15 12:55:56.229865 sshd-session[6815]: pam_unix(sshd:session): session closed for user core
May 15 12:55:56.235228 systemd[1]: sshd@25-172.236.126.108:22-139.178.89.65:55276.service: Deactivated successfully.
May 15 12:55:56.238353 systemd[1]: session-26.scope: Deactivated successfully.
May 15 12:55:56.240013 systemd-logind[1531]: Session 26 logged out. Waiting for processes to exit.
May 15 12:55:56.242230 systemd-logind[1531]: Removed session 26.
May 15 12:56:01.301779 systemd[1]: Started sshd@26-172.236.126.108:22-139.178.89.65:45620.service - OpenSSH per-connection server daemon (139.178.89.65:45620).
May 15 12:56:01.660324 sshd[6828]: Accepted publickey for core from 139.178.89.65 port 45620 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:01.662096 sshd-session[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:01.667734 systemd-logind[1531]: New session 27 of user core.
May 15 12:56:01.673692 systemd[1]: Started session-27.scope - Session 27 of User core.
May 15 12:56:01.966882 sshd[6830]: Connection closed by 139.178.89.65 port 45620
May 15 12:56:01.968629 sshd-session[6828]: pam_unix(sshd:session): session closed for user core
May 15 12:56:01.972873 systemd[1]: sshd@26-172.236.126.108:22-139.178.89.65:45620.service: Deactivated successfully.
May 15 12:56:01.975197 systemd[1]: session-27.scope: Deactivated successfully.
May 15 12:56:01.976452 systemd-logind[1531]: Session 27 logged out. Waiting for processes to exit.
May 15 12:56:01.977911 systemd-logind[1531]: Removed session 27.
May 15 12:56:07.036511 systemd[1]: Started sshd@27-172.236.126.108:22-194.0.234.16:15410.service - OpenSSH per-connection server daemon (194.0.234.16:15410).
May 15 12:56:07.040320 systemd[1]: Started sshd@28-172.236.126.108:22-139.178.89.65:45630.service - OpenSSH per-connection server daemon (139.178.89.65:45630).
May 15 12:56:07.380612 sshd[6846]: Accepted publickey for core from 139.178.89.65 port 45630 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:07.382216 sshd-session[6846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:07.388292 systemd-logind[1531]: New session 28 of user core.
May 15 12:56:07.396878 systemd[1]: Started session-28.scope - Session 28 of User core.
May 15 12:56:07.686774 sshd[6848]: Connection closed by 139.178.89.65 port 45630
May 15 12:56:07.687609 sshd-session[6846]: pam_unix(sshd:session): session closed for user core
May 15 12:56:07.693546 systemd[1]: sshd@28-172.236.126.108:22-139.178.89.65:45630.service: Deactivated successfully.
May 15 12:56:07.697307 systemd[1]: session-28.scope: Deactivated successfully.
May 15 12:56:07.698444 systemd-logind[1531]: Session 28 logged out. Waiting for processes to exit.
May 15 12:56:07.700465 systemd-logind[1531]: Removed session 28.
May 15 12:56:08.321797 sshd[6845]: Invalid user admin from 194.0.234.16 port 15410
May 15 12:56:08.548608 sshd[6845]: Connection closed by invalid user admin 194.0.234.16 port 15410 [preauth]
May 15 12:56:08.553118 systemd[1]: sshd@27-172.236.126.108:22-194.0.234.16:15410.service: Deactivated successfully.
May 15 12:56:12.026118 kubelet[2697]: E0515 12:56:12.026067 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:56:12.752344 systemd[1]: Started sshd@29-172.236.126.108:22-139.178.89.65:45644.service - OpenSSH per-connection server daemon (139.178.89.65:45644).
May 15 12:56:13.026169 kubelet[2697]: E0515 12:56:13.026014 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:56:13.105740 sshd[6863]: Accepted publickey for core from 139.178.89.65 port 45644 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:13.107760 sshd-session[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:13.112821 systemd-logind[1531]: New session 29 of user core.
May 15 12:56:13.118693 systemd[1]: Started session-29.scope - Session 29 of User core.
May 15 12:56:13.403450 sshd[6865]: Connection closed by 139.178.89.65 port 45644
May 15 12:56:13.404755 sshd-session[6863]: pam_unix(sshd:session): session closed for user core
May 15 12:56:13.409132 systemd-logind[1531]: Session 29 logged out. Waiting for processes to exit.
May 15 12:56:13.409441 systemd[1]: sshd@29-172.236.126.108:22-139.178.89.65:45644.service: Deactivated successfully.
May 15 12:56:13.416257 systemd[1]: session-29.scope: Deactivated successfully.
May 15 12:56:13.418682 systemd-logind[1531]: Removed session 29.
May 15 12:56:15.501158 containerd[1555]: time="2025-05-15T12:56:15.501120499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"f83a903028ab144f0921c4719a27ef0584c372ab57df6ace76d528e47940f373\" pid:6888 exited_at:{seconds:1747313775 nanos:500483273}"
May 15 12:56:16.467490 containerd[1555]: time="2025-05-15T12:56:16.467433221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"9fd443555b89c869e124de531c307b76d0d69db1add7083d5e68606dab9968b9\" pid:6912 exited_at:{seconds:1747313776 nanos:467294500}"
May 15 12:56:18.026595 kubelet[2697]: E0515 12:56:18.026304 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:56:18.466749 systemd[1]: Started sshd@30-172.236.126.108:22-139.178.89.65:39482.service - OpenSSH per-connection server daemon (139.178.89.65:39482).
May 15 12:56:18.803787 sshd[6922]: Accepted publickey for core from 139.178.89.65 port 39482 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:18.805645 sshd-session[6922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:18.812943 systemd-logind[1531]: New session 30 of user core.
May 15 12:56:18.816706 systemd[1]: Started session-30.scope - Session 30 of User core.
May 15 12:56:19.027257 kubelet[2697]: E0515 12:56:19.026455 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:56:19.108721 sshd[6925]: Connection closed by 139.178.89.65 port 39482
May 15 12:56:19.108990 sshd-session[6922]: pam_unix(sshd:session): session closed for user core
May 15 12:56:19.114919 systemd[1]: sshd@30-172.236.126.108:22-139.178.89.65:39482.service: Deactivated successfully.
May 15 12:56:19.118216 systemd[1]: session-30.scope: Deactivated successfully.
May 15 12:56:19.119240 systemd-logind[1531]: Session 30 logged out. Waiting for processes to exit.
May 15 12:56:19.121039 systemd-logind[1531]: Removed session 30.
May 15 12:56:24.171484 systemd[1]: Started sshd@31-172.236.126.108:22-139.178.89.65:39494.service - OpenSSH per-connection server daemon (139.178.89.65:39494).
May 15 12:56:24.511954 sshd[6938]: Accepted publickey for core from 139.178.89.65 port 39494 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:24.513505 sshd-session[6938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:24.519069 systemd-logind[1531]: New session 31 of user core.
May 15 12:56:24.527742 systemd[1]: Started session-31.scope - Session 31 of User core.
May 15 12:56:24.821712 sshd[6940]: Connection closed by 139.178.89.65 port 39494
May 15 12:56:24.822846 sshd-session[6938]: pam_unix(sshd:session): session closed for user core
May 15 12:56:24.827239 systemd[1]: sshd@31-172.236.126.108:22-139.178.89.65:39494.service: Deactivated successfully.
May 15 12:56:24.831393 systemd[1]: session-31.scope: Deactivated successfully.
May 15 12:56:24.832932 systemd-logind[1531]: Session 31 logged out. Waiting for processes to exit.
May 15 12:56:24.834445 systemd-logind[1531]: Removed session 31.
May 15 12:56:29.891997 systemd[1]: Started sshd@32-172.236.126.108:22-139.178.89.65:56744.service - OpenSSH per-connection server daemon (139.178.89.65:56744).
May 15 12:56:30.027360 kubelet[2697]: E0515 12:56:30.027306 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:56:30.247696 sshd[6960]: Accepted publickey for core from 139.178.89.65 port 56744 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:30.248902 sshd-session[6960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:30.254986 systemd-logind[1531]: New session 32 of user core.
May 15 12:56:30.262709 systemd[1]: Started session-32.scope - Session 32 of User core.
May 15 12:56:30.557136 sshd[6963]: Connection closed by 139.178.89.65 port 56744
May 15 12:56:30.558130 sshd-session[6960]: pam_unix(sshd:session): session closed for user core
May 15 12:56:30.562369 systemd[1]: sshd@32-172.236.126.108:22-139.178.89.65:56744.service: Deactivated successfully.
May 15 12:56:30.565013 systemd[1]: session-32.scope: Deactivated successfully.
May 15 12:56:30.567546 systemd-logind[1531]: Session 32 logged out. Waiting for processes to exit.
May 15 12:56:30.569743 systemd-logind[1531]: Removed session 32.
May 15 12:56:35.620011 systemd[1]: Started sshd@33-172.236.126.108:22-139.178.89.65:56750.service - OpenSSH per-connection server daemon (139.178.89.65:56750).
May 15 12:56:35.957697 sshd[6979]: Accepted publickey for core from 139.178.89.65 port 56750 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:35.959292 sshd-session[6979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:35.964177 systemd-logind[1531]: New session 33 of user core.
May 15 12:56:35.969688 systemd[1]: Started session-33.scope - Session 33 of User core.
May 15 12:56:36.270072 sshd[6981]: Connection closed by 139.178.89.65 port 56750
May 15 12:56:36.271748 sshd-session[6979]: pam_unix(sshd:session): session closed for user core
May 15 12:56:36.276248 systemd-logind[1531]: Session 33 logged out. Waiting for processes to exit.
May 15 12:56:36.276436 systemd[1]: sshd@33-172.236.126.108:22-139.178.89.65:56750.service: Deactivated successfully.
May 15 12:56:36.279363 systemd[1]: session-33.scope: Deactivated successfully.
May 15 12:56:36.281335 systemd-logind[1531]: Removed session 33.
May 15 12:56:41.332631 systemd[1]: Started sshd@34-172.236.126.108:22-139.178.89.65:43456.service - OpenSSH per-connection server daemon (139.178.89.65:43456).
May 15 12:56:41.678957 sshd[6993]: Accepted publickey for core from 139.178.89.65 port 43456 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:41.680819 sshd-session[6993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:41.686098 systemd-logind[1531]: New session 34 of user core.
May 15 12:56:41.694689 systemd[1]: Started session-34.scope - Session 34 of User core.
May 15 12:56:41.980126 sshd[6995]: Connection closed by 139.178.89.65 port 43456
May 15 12:56:41.980796 sshd-session[6993]: pam_unix(sshd:session): session closed for user core
May 15 12:56:41.987057 systemd[1]: sshd@34-172.236.126.108:22-139.178.89.65:43456.service: Deactivated successfully.
May 15 12:56:41.989965 systemd[1]: session-34.scope: Deactivated successfully.
May 15 12:56:41.990886 systemd-logind[1531]: Session 34 logged out. Waiting for processes to exit.
May 15 12:56:41.992488 systemd-logind[1531]: Removed session 34.
May 15 12:56:44.026784 kubelet[2697]: E0515 12:56:44.025964 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:56:44.826539 containerd[1555]: time="2025-05-15T12:56:44.826482593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"40c140867928b633e2e5f27c7ec1ea97769bf8ef64095173a0033dde1f1b9632\" pid:7018 exited_at:{seconds:1747313804 nanos:826199801}"
May 15 12:56:45.511351 containerd[1555]: time="2025-05-15T12:56:45.511144099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"8ae65bb9a1a5350782f3c93945dee814e170461905e3feef5d1a3b247e5714f1\" pid:7040 exited_at:{seconds:1747313805 nanos:510482365}"
May 15 12:56:46.470515 containerd[1555]: time="2025-05-15T12:56:46.470461447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"0a84b92ab3bea63187bf79d6d17b32f00bdb5b65f5a4ccc982042049f91d4ef6\" pid:7063 exited_at:{seconds:1747313806 nanos:470071785}"
May 15 12:56:47.039531 systemd[1]: Started sshd@35-172.236.126.108:22-139.178.89.65:42958.service - OpenSSH per-connection server daemon (139.178.89.65:42958).
May 15 12:56:47.376585 sshd[7074]: Accepted publickey for core from 139.178.89.65 port 42958 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:47.378010 sshd-session[7074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:47.384308 systemd-logind[1531]: New session 35 of user core.
May 15 12:56:47.389802 systemd[1]: Started session-35.scope - Session 35 of User core.
May 15 12:56:47.674521 sshd[7076]: Connection closed by 139.178.89.65 port 42958
May 15 12:56:47.674938 sshd-session[7074]: pam_unix(sshd:session): session closed for user core
May 15 12:56:47.679935 systemd[1]: sshd@35-172.236.126.108:22-139.178.89.65:42958.service: Deactivated successfully.
May 15 12:56:47.682974 systemd[1]: session-35.scope: Deactivated successfully.
May 15 12:56:47.684127 systemd-logind[1531]: Session 35 logged out. Waiting for processes to exit.
May 15 12:56:47.686120 systemd-logind[1531]: Removed session 35.
May 15 12:56:52.736162 systemd[1]: Started sshd@36-172.236.126.108:22-139.178.89.65:42972.service - OpenSSH per-connection server daemon (139.178.89.65:42972).
May 15 12:56:53.080212 sshd[7089]: Accepted publickey for core from 139.178.89.65 port 42972 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:53.081877 sshd-session[7089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:53.087103 systemd-logind[1531]: New session 36 of user core.
May 15 12:56:53.092681 systemd[1]: Started session-36.scope - Session 36 of User core.
May 15 12:56:53.382416 sshd[7091]: Connection closed by 139.178.89.65 port 42972
May 15 12:56:53.383058 sshd-session[7089]: pam_unix(sshd:session): session closed for user core
May 15 12:56:53.388530 systemd-logind[1531]: Session 36 logged out. Waiting for processes to exit.
May 15 12:56:53.389101 systemd[1]: sshd@36-172.236.126.108:22-139.178.89.65:42972.service: Deactivated successfully.
May 15 12:56:53.391850 systemd[1]: session-36.scope: Deactivated successfully.
May 15 12:56:53.393789 systemd-logind[1531]: Removed session 36.
May 15 12:56:58.451383 systemd[1]: Started sshd@37-172.236.126.108:22-139.178.89.65:41874.service - OpenSSH per-connection server daemon (139.178.89.65:41874).
May 15 12:56:58.805174 sshd[7110]: Accepted publickey for core from 139.178.89.65 port 41874 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:56:58.807079 sshd-session[7110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:56:58.813394 systemd-logind[1531]: New session 37 of user core.
May 15 12:56:58.818674 systemd[1]: Started session-37.scope - Session 37 of User core.
May 15 12:56:59.114981 sshd[7112]: Connection closed by 139.178.89.65 port 41874
May 15 12:56:59.115851 sshd-session[7110]: pam_unix(sshd:session): session closed for user core
May 15 12:56:59.120819 systemd[1]: sshd@37-172.236.126.108:22-139.178.89.65:41874.service: Deactivated successfully.
May 15 12:56:59.123844 systemd[1]: session-37.scope: Deactivated successfully.
May 15 12:56:59.125234 systemd-logind[1531]: Session 37 logged out. Waiting for processes to exit.
May 15 12:56:59.127418 systemd-logind[1531]: Removed session 37.
May 15 12:57:04.187263 systemd[1]: Started sshd@38-172.236.126.108:22-139.178.89.65:41890.service - OpenSSH per-connection server daemon (139.178.89.65:41890).
May 15 12:57:04.530709 sshd[7137]: Accepted publickey for core from 139.178.89.65 port 41890 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:04.532248 sshd-session[7137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:04.538286 systemd-logind[1531]: New session 38 of user core.
May 15 12:57:04.548722 systemd[1]: Started session-38.scope - Session 38 of User core.
May 15 12:57:04.836655 sshd[7139]: Connection closed by 139.178.89.65 port 41890
May 15 12:57:04.837379 sshd-session[7137]: pam_unix(sshd:session): session closed for user core
May 15 12:57:04.842438 systemd[1]: sshd@38-172.236.126.108:22-139.178.89.65:41890.service: Deactivated successfully.
May 15 12:57:04.845738 systemd[1]: session-38.scope: Deactivated successfully.
May 15 12:57:04.847089 systemd-logind[1531]: Session 38 logged out. Waiting for processes to exit.
May 15 12:57:04.848659 systemd-logind[1531]: Removed session 38.
May 15 12:57:09.026280 kubelet[2697]: E0515 12:57:09.026214 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:09.904951 systemd[1]: Started sshd@39-172.236.126.108:22-139.178.89.65:57638.service - OpenSSH per-connection server daemon (139.178.89.65:57638).
May 15 12:57:10.255001 sshd[7151]: Accepted publickey for core from 139.178.89.65 port 57638 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:10.256737 sshd-session[7151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:10.262301 systemd-logind[1531]: New session 39 of user core.
May 15 12:57:10.271699 systemd[1]: Started session-39.scope - Session 39 of User core.
May 15 12:57:10.568475 sshd[7153]: Connection closed by 139.178.89.65 port 57638
May 15 12:57:10.569443 sshd-session[7151]: pam_unix(sshd:session): session closed for user core
May 15 12:57:10.573950 systemd-logind[1531]: Session 39 logged out. Waiting for processes to exit.
May 15 12:57:10.575274 systemd[1]: sshd@39-172.236.126.108:22-139.178.89.65:57638.service: Deactivated successfully.
May 15 12:57:10.577931 systemd[1]: session-39.scope: Deactivated successfully.
May 15 12:57:10.580631 systemd-logind[1531]: Removed session 39.
May 15 12:57:12.026605 kubelet[2697]: E0515 12:57:12.026010 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:15.514822 containerd[1555]: time="2025-05-15T12:57:15.514760151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"077d3be17228f4237858a1952acb751072743f1227e2f15632069d80893a09e0\" pid:7176 exited_at:{seconds:1747313835 nanos:514105448}"
May 15 12:57:15.629071 systemd[1]: Started sshd@40-172.236.126.108:22-139.178.89.65:57652.service - OpenSSH per-connection server daemon (139.178.89.65:57652).
May 15 12:57:15.969477 sshd[7191]: Accepted publickey for core from 139.178.89.65 port 57652 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:15.971225 sshd-session[7191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:15.977414 systemd-logind[1531]: New session 40 of user core.
May 15 12:57:15.982924 systemd[1]: Started session-40.scope - Session 40 of User core.
May 15 12:57:16.026779 kubelet[2697]: E0515 12:57:16.026609 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:16.298997 sshd[7193]: Connection closed by 139.178.89.65 port 57652
May 15 12:57:16.299944 sshd-session[7191]: pam_unix(sshd:session): session closed for user core
May 15 12:57:16.304980 systemd-logind[1531]: Session 40 logged out. Waiting for processes to exit.
May 15 12:57:16.305912 systemd[1]: sshd@40-172.236.126.108:22-139.178.89.65:57652.service: Deactivated successfully.
May 15 12:57:16.308416 systemd[1]: session-40.scope: Deactivated successfully.
May 15 12:57:16.311172 systemd-logind[1531]: Removed session 40.
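(Editorial aside, not journal output: the recurring kubelet dns.go:153 warning fires because glibc resolvers only honor the first three nameserver entries in resolv.conf, so kubelet trims the list to the three addresses it reports and warns that the rest were omitted. A minimal Go sketch of that truncation logic, assuming a simple resolv.conf parser; this is an illustration, not kubelet's actual code, and the fourth nameserver in the sample is hypothetical.)

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// glibc only consults the first three "nameserver" lines in resolv.conf.
const maxNameservers = 3

// applyLimit returns the nameservers that would actually be used, plus
// whether any were dropped (the condition the warning above reports).
func applyLimit(resolvConf string) ([]string, bool) {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) >= 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true
	}
	return servers, false
}

func main() {
	// The first three addresses are the ones kubelet reports above; the
	// fourth (192.0.2.1, a TEST-NET address) is a hypothetical extra that
	// would trigger the warning.
	conf := "nameserver 172.232.0.15\nnameserver 172.232.0.18\nnameserver 172.232.0.17\nnameserver 192.0.2.1\n"
	kept, trimmed := applyLimit(conf)
	fmt.Println(kept, trimmed) // [172.232.0.15 172.232.0.18 172.232.0.17] true
}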
May 15 12:57:16.470813 containerd[1555]: time="2025-05-15T12:57:16.470758045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"b67066df6158b065468bb4a1e30d187164dc8ff0aeb3e3a63d984ec6a3d23ea0\" pid:7216 exited_at:{seconds:1747313836 nanos:470212782}"
May 15 12:57:21.242768 containerd[1555]: time="2025-05-15T12:57:21.242651619Z" level=warning msg="container event discarded" container=6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3 type=CONTAINER_CREATED_EVENT
May 15 12:57:21.254012 containerd[1555]: time="2025-05-15T12:57:21.253933077Z" level=warning msg="container event discarded" container=6fb01c47dc64b56388229df24165d28e6c44c00299cc8bbc918ce7968b1f1eb3 type=CONTAINER_STARTED_EVENT
May 15 12:57:21.274596 containerd[1555]: time="2025-05-15T12:57:21.274469032Z" level=warning msg="container event discarded" container=487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8 type=CONTAINER_CREATED_EVENT
May 15 12:57:21.312857 containerd[1555]: time="2025-05-15T12:57:21.312783629Z" level=warning msg="container event discarded" container=cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd type=CONTAINER_CREATED_EVENT
May 15 12:57:21.312857 containerd[1555]: time="2025-05-15T12:57:21.312824559Z" level=warning msg="container event discarded" container=cc13b50097af63b02730e52ad6ccc874f08fb5280eeaf76418fc2dd8c639d7cd type=CONTAINER_STARTED_EVENT
May 15 12:57:21.348078 containerd[1555]: time="2025-05-15T12:57:21.348001139Z" level=warning msg="container event discarded" container=a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00 type=CONTAINER_CREATED_EVENT
May 15 12:57:21.348078 containerd[1555]: time="2025-05-15T12:57:21.348029389Z" level=warning msg="container event discarded" container=a2da1a08c8e3212e5fcfb7ae7449d6cc8e9f0a437feaa60388c4b66bac165f00 type=CONTAINER_STARTED_EVENT
May 15 12:57:21.348078 containerd[1555]: time="2025-05-15T12:57:21.348038880Z" level=warning msg="container event discarded" container=3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468 type=CONTAINER_CREATED_EVENT
May 15 12:57:21.363546 systemd[1]: Started sshd@41-172.236.126.108:22-139.178.89.65:53344.service - OpenSSH per-connection server daemon (139.178.89.65:53344).
May 15 12:57:21.376344 containerd[1555]: time="2025-05-15T12:57:21.376270534Z" level=warning msg="container event discarded" container=5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6 type=CONTAINER_CREATED_EVENT
May 15 12:57:21.423748 containerd[1555]: time="2025-05-15T12:57:21.423637247Z" level=warning msg="container event discarded" container=487eb9b7d44be421c60e6be29df770563f25f4e6ef6f971bb899c92ecdb1b4b8 type=CONTAINER_STARTED_EVENT
May 15 12:57:21.497146 containerd[1555]: time="2025-05-15T12:57:21.496940053Z" level=warning msg="container event discarded" container=3eb4081c698f2d4e871988f397984d1f79bb2f4f838cb308ee07011ac62a9468 type=CONTAINER_STARTED_EVENT
May 15 12:57:21.577473 containerd[1555]: time="2025-05-15T12:57:21.577392505Z" level=warning msg="container event discarded" container=5c4c0d898275a2dcde1b68527db6235ae40e99050b2797edda3b2d683186dba6 type=CONTAINER_STARTED_EVENT
May 15 12:57:21.721825 sshd[7227]: Accepted publickey for core from 139.178.89.65 port 53344 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:21.724167 sshd-session[7227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:21.730978 systemd-logind[1531]: New session 41 of user core.
May 15 12:57:21.734705 systemd[1]: Started session-41.scope - Session 41 of User core.
May 15 12:57:22.026401 kubelet[2697]: E0515 12:57:22.026312 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:22.040299 sshd[7229]: Connection closed by 139.178.89.65 port 53344
May 15 12:57:22.041233 sshd-session[7227]: pam_unix(sshd:session): session closed for user core
May 15 12:57:22.045436 systemd-logind[1531]: Session 41 logged out. Waiting for processes to exit.
May 15 12:57:22.046358 systemd[1]: sshd@41-172.236.126.108:22-139.178.89.65:53344.service: Deactivated successfully.
May 15 12:57:22.048889 systemd[1]: session-41.scope: Deactivated successfully.
May 15 12:57:22.051164 systemd-logind[1531]: Removed session 41.
May 15 12:57:26.026576 kubelet[2697]: E0515 12:57:26.026528 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:27.101409 systemd[1]: Started sshd@42-172.236.126.108:22-139.178.89.65:54150.service - OpenSSH per-connection server daemon (139.178.89.65:54150).
May 15 12:57:27.440252 sshd[7243]: Accepted publickey for core from 139.178.89.65 port 54150 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:27.441767 sshd-session[7243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:27.447929 systemd-logind[1531]: New session 42 of user core.
May 15 12:57:27.455685 systemd[1]: Started session-42.scope - Session 42 of User core.
May 15 12:57:27.744446 sshd[7245]: Connection closed by 139.178.89.65 port 54150
May 15 12:57:27.745304 sshd-session[7243]: pam_unix(sshd:session): session closed for user core
May 15 12:57:27.749871 systemd[1]: sshd@42-172.236.126.108:22-139.178.89.65:54150.service: Deactivated successfully.
May 15 12:57:27.750392 systemd-logind[1531]: Session 42 logged out. Waiting for processes to exit.
May 15 12:57:27.752896 systemd[1]: session-42.scope: Deactivated successfully.
May 15 12:57:27.754591 systemd-logind[1531]: Removed session 42.
May 15 12:57:31.711124 containerd[1555]: time="2025-05-15T12:57:31.711034539Z" level=warning msg="container event discarded" container=7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023 type=CONTAINER_CREATED_EVENT
May 15 12:57:31.711124 containerd[1555]: time="2025-05-15T12:57:31.711114019Z" level=warning msg="container event discarded" container=7a30eda49d529c26ce5bbab21a081e9c2a1201427d445228e7a949ee8ceef023 type=CONTAINER_STARTED_EVENT
May 15 12:57:31.732566 containerd[1555]: time="2025-05-15T12:57:31.732506762Z" level=warning msg="container event discarded" container=aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5 type=CONTAINER_CREATED_EVENT
May 15 12:57:31.806867 containerd[1555]: time="2025-05-15T12:57:31.806801259Z" level=warning msg="container event discarded" container=aaf597b9d9a3e56c36504b4c1c637a4819aa348524016b6ad58b827462212ef5 type=CONTAINER_STARTED_EVENT
May 15 12:57:32.026451 kubelet[2697]: E0515 12:57:32.025885 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:32.811909 systemd[1]: Started sshd@43-172.236.126.108:22-139.178.89.65:54160.service - OpenSSH per-connection server daemon (139.178.89.65:54160).
May 15 12:57:33.068756 containerd[1555]: time="2025-05-15T12:57:33.068691701Z" level=warning msg="container event discarded" container=079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9 type=CONTAINER_CREATED_EVENT
May 15 12:57:33.068756 containerd[1555]: time="2025-05-15T12:57:33.068755491Z" level=warning msg="container event discarded" container=079e6915c7372a0d8b23fad1f2d6a1e531607bab06b17c9ff50f5696e642edd9 type=CONTAINER_STARTED_EVENT
May 15 12:57:33.173142 sshd[7260]: Accepted publickey for core from 139.178.89.65 port 54160 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:33.174610 sshd-session[7260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:33.180710 systemd-logind[1531]: New session 43 of user core.
May 15 12:57:33.185762 systemd[1]: Started session-43.scope - Session 43 of User core.
May 15 12:57:33.525291 sshd[7262]: Connection closed by 139.178.89.65 port 54160
May 15 12:57:33.526079 sshd-session[7260]: pam_unix(sshd:session): session closed for user core
May 15 12:57:33.533258 systemd[1]: sshd@43-172.236.126.108:22-139.178.89.65:54160.service: Deactivated successfully.
May 15 12:57:33.536411 systemd[1]: session-43.scope: Deactivated successfully.
May 15 12:57:33.538908 systemd-logind[1531]: Session 43 logged out. Waiting for processes to exit.
May 15 12:57:33.542978 systemd-logind[1531]: Removed session 43.
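(Editorial aside, not journal output: the "container event discarded" warnings in this section are emitted when the event consumer falls behind and containerd drops events from a bounded queue rather than block the runtime. A generic Go sketch of that drop-when-full pattern, assumed behavior for illustration only, not containerd's implementation.)

package main

import "fmt"

type event struct {
	container string
	kind      string
}

func main() {
	// Deliberately tiny buffer standing in for a bounded event queue.
	queue := make(chan event, 2)
	for i := 0; i < 5; i++ {
		ev := event{container: fmt.Sprintf("%064x", i), kind: "CONTAINER_STARTED_EVENT"}
		select {
		case queue <- ev:
			// Consumer keeping up: event enqueued.
		default:
			// Queue full: drop the event and warn, as in the log lines above,
			// instead of blocking the producer.
			fmt.Printf("container event discarded container=%s type=%s\n", ev.container, ev.kind)
		}
	}
}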
May 15 12:57:35.859350 containerd[1555]: time="2025-05-15T12:57:35.859285872Z" level=warning msg="container event discarded" container=b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831 type=CONTAINER_CREATED_EVENT
May 15 12:57:36.058675 containerd[1555]: time="2025-05-15T12:57:36.058619875Z" level=warning msg="container event discarded" container=b0719da8fad8804e783a0ab693b7e7efe12701c9e96061b3d73fb5eb9a36f831 type=CONTAINER_STARTED_EVENT
May 15 12:57:38.027061 kubelet[2697]: E0515 12:57:38.026287 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:38.587460 systemd[1]: Started sshd@44-172.236.126.108:22-139.178.89.65:32808.service - OpenSSH per-connection server daemon (139.178.89.65:32808).
May 15 12:57:38.929987 sshd[7274]: Accepted publickey for core from 139.178.89.65 port 32808 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:38.931601 sshd-session[7274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:38.937131 systemd-logind[1531]: New session 44 of user core.
May 15 12:57:38.944790 systemd[1]: Started session-44.scope - Session 44 of User core.
May 15 12:57:39.238676 sshd[7276]: Connection closed by 139.178.89.65 port 32808
May 15 12:57:39.240219 sshd-session[7274]: pam_unix(sshd:session): session closed for user core
May 15 12:57:39.244575 systemd-logind[1531]: Session 44 logged out. Waiting for processes to exit.
May 15 12:57:39.245452 systemd[1]: sshd@44-172.236.126.108:22-139.178.89.65:32808.service: Deactivated successfully.
May 15 12:57:39.248522 systemd[1]: session-44.scope: Deactivated successfully.
May 15 12:57:39.253781 systemd-logind[1531]: Removed session 44.
May 15 12:57:40.109645 containerd[1555]: time="2025-05-15T12:57:40.109595291Z" level=warning msg="container event discarded" container=49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea type=CONTAINER_CREATED_EVENT
May 15 12:57:40.109645 containerd[1555]: time="2025-05-15T12:57:40.109634331Z" level=warning msg="container event discarded" container=49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea type=CONTAINER_STARTED_EVENT
May 15 12:57:40.150857 containerd[1555]: time="2025-05-15T12:57:40.150818329Z" level=warning msg="container event discarded" container=1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6 type=CONTAINER_CREATED_EVENT
May 15 12:57:40.150857 containerd[1555]: time="2025-05-15T12:57:40.150852999Z" level=warning msg="container event discarded" container=1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6 type=CONTAINER_STARTED_EVENT
May 15 12:57:44.016595 containerd[1555]: time="2025-05-15T12:57:44.016450100Z" level=warning msg="container event discarded" container=e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7 type=CONTAINER_CREATED_EVENT
May 15 12:57:44.224677 containerd[1555]: time="2025-05-15T12:57:44.224591777Z" level=warning msg="container event discarded" container=e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7 type=CONTAINER_STARTED_EVENT
May 15 12:57:44.298215 systemd[1]: Started sshd@45-172.236.126.108:22-139.178.89.65:32812.service - OpenSSH per-connection server daemon (139.178.89.65:32812).
May 15 12:57:44.345992 containerd[1555]: time="2025-05-15T12:57:44.345930247Z" level=warning msg="container event discarded" container=e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7 type=CONTAINER_STOPPED_EVENT
May 15 12:57:44.638607 sshd[7288]: Accepted publickey for core from 139.178.89.65 port 32812 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:44.640071 sshd-session[7288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:44.646359 systemd-logind[1531]: New session 45 of user core.
May 15 12:57:44.653895 systemd[1]: Started session-45.scope - Session 45 of User core.
May 15 12:57:44.835423 containerd[1555]: time="2025-05-15T12:57:44.835303668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"a7f3c54c7166b23c18e4b967f89c53a10ddae3a11f71904480200924bb2bd68c\" pid:7303 exited_at:{seconds:1747313864 nanos:834412944}"
May 15 12:57:44.950720 sshd[7290]: Connection closed by 139.178.89.65 port 32812
May 15 12:57:44.951050 sshd-session[7288]: pam_unix(sshd:session): session closed for user core
May 15 12:57:44.955516 systemd[1]: sshd@45-172.236.126.108:22-139.178.89.65:32812.service: Deactivated successfully.
May 15 12:57:44.958039 systemd[1]: session-45.scope: Deactivated successfully.
May 15 12:57:44.962715 systemd-logind[1531]: Session 45 logged out. Waiting for processes to exit.
May 15 12:57:44.963816 systemd-logind[1531]: Removed session 45.
May 15 12:57:45.508292 containerd[1555]: time="2025-05-15T12:57:45.508240404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"64de87f580a3c3a4bdbcd6853d59934ea373a1e9f2606265300dd70bfb957619\" pid:7333 exited_at:{seconds:1747313865 nanos:507450431}"
May 15 12:57:46.525065 containerd[1555]: time="2025-05-15T12:57:46.524994268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"8f002b6b7c91232e5c9c7634a9d30aa646b80eaa6c78c2998de0f9c924471424\" pid:7356 exited_at:{seconds:1747313866 nanos:524773637}"
May 15 12:57:48.028423 kubelet[2697]: E0515 12:57:48.028367 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:57:50.016948 systemd[1]: Started sshd@46-172.236.126.108:22-139.178.89.65:57168.service - OpenSSH per-connection server daemon (139.178.89.65:57168).
May 15 12:57:50.356540 sshd[7367]: Accepted publickey for core from 139.178.89.65 port 57168 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:50.358507 sshd-session[7367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:50.364034 systemd-logind[1531]: New session 46 of user core.
May 15 12:57:50.368714 systemd[1]: Started session-46.scope - Session 46 of User core.
May 15 12:57:50.656984 sshd[7369]: Connection closed by 139.178.89.65 port 57168
May 15 12:57:50.658182 sshd-session[7367]: pam_unix(sshd:session): session closed for user core
May 15 12:57:50.665858 systemd[1]: sshd@46-172.236.126.108:22-139.178.89.65:57168.service: Deactivated successfully.
May 15 12:57:50.668794 systemd[1]: session-46.scope: Deactivated successfully.
May 15 12:57:50.669704 systemd-logind[1531]: Session 46 logged out. Waiting for processes to exit.
May 15 12:57:50.671841 systemd-logind[1531]: Removed session 46.
May 15 12:57:50.931158 containerd[1555]: time="2025-05-15T12:57:50.931013209Z" level=warning msg="container event discarded" container=80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5 type=CONTAINER_CREATED_EVENT
May 15 12:57:51.072536 containerd[1555]: time="2025-05-15T12:57:51.072472978Z" level=warning msg="container event discarded" container=80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5 type=CONTAINER_STARTED_EVENT
May 15 12:57:55.718271 systemd[1]: Started sshd@47-172.236.126.108:22-139.178.89.65:57184.service - OpenSSH per-connection server daemon (139.178.89.65:57184).
May 15 12:57:56.063725 sshd[7381]: Accepted publickey for core from 139.178.89.65 port 57184 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:56.065801 sshd-session[7381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:56.072323 systemd-logind[1531]: New session 47 of user core.
May 15 12:57:56.078717 systemd[1]: Started session-47.scope - Session 47 of User core.
May 15 12:57:56.373254 sshd[7383]: Connection closed by 139.178.89.65 port 57184
May 15 12:57:56.374175 sshd-session[7381]: pam_unix(sshd:session): session closed for user core
May 15 12:57:56.379908 systemd[1]: sshd@47-172.236.126.108:22-139.178.89.65:57184.service: Deactivated successfully.
May 15 12:57:56.382449 systemd[1]: session-47.scope: Deactivated successfully.
May 15 12:57:56.384332 systemd-logind[1531]: Session 47 logged out. Waiting for processes to exit.
May 15 12:57:56.386529 systemd-logind[1531]: Removed session 47.
May 15 12:57:56.434575 systemd[1]: Started sshd@48-172.236.126.108:22-139.178.89.65:57194.service - OpenSSH per-connection server daemon (139.178.89.65:57194).
May 15 12:57:56.771397 sshd[7395]: Accepted publickey for core from 139.178.89.65 port 57194 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:56.773353 sshd-session[7395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:56.779086 systemd-logind[1531]: New session 48 of user core.
May 15 12:57:56.785687 systemd[1]: Started session-48.scope - Session 48 of User core.
May 15 12:57:57.196036 sshd[7397]: Connection closed by 139.178.89.65 port 57194
May 15 12:57:57.196882 sshd-session[7395]: pam_unix(sshd:session): session closed for user core
May 15 12:57:57.203045 systemd-logind[1531]: Session 48 logged out. Waiting for processes to exit.
May 15 12:57:57.203806 systemd[1]: sshd@48-172.236.126.108:22-139.178.89.65:57194.service: Deactivated successfully.
May 15 12:57:57.206154 systemd[1]: session-48.scope: Deactivated successfully.
May 15 12:57:57.208526 systemd-logind[1531]: Removed session 48.
May 15 12:57:57.262203 systemd[1]: Started sshd@49-172.236.126.108:22-139.178.89.65:53010.service - OpenSSH per-connection server daemon (139.178.89.65:53010).
May 15 12:57:57.625199 sshd[7407]: Accepted publickey for core from 139.178.89.65 port 53010 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:57.627052 sshd-session[7407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:57.634514 systemd-logind[1531]: New session 49 of user core.
May 15 12:57:57.637897 systemd[1]: Started session-49.scope - Session 49 of User core.
May 15 12:57:58.521763 sshd[7409]: Connection closed by 139.178.89.65 port 53010
May 15 12:57:58.522459 sshd-session[7407]: pam_unix(sshd:session): session closed for user core
May 15 12:57:58.527901 systemd-logind[1531]: Session 49 logged out. Waiting for processes to exit.
May 15 12:57:58.528902 systemd[1]: sshd@49-172.236.126.108:22-139.178.89.65:53010.service: Deactivated successfully.
May 15 12:57:58.532197 systemd[1]: session-49.scope: Deactivated successfully.
May 15 12:57:58.534072 systemd-logind[1531]: Removed session 49.
May 15 12:57:58.582297 systemd[1]: Started sshd@50-172.236.126.108:22-139.178.89.65:53014.service - OpenSSH per-connection server daemon (139.178.89.65:53014).
May 15 12:57:58.930531 sshd[7428]: Accepted publickey for core from 139.178.89.65 port 53014 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:58.932173 sshd-session[7428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:58.938304 systemd-logind[1531]: New session 50 of user core.
May 15 12:57:58.943823 systemd[1]: Started session-50.scope - Session 50 of User core.
May 15 12:57:59.350542 sshd[7430]: Connection closed by 139.178.89.65 port 53014
May 15 12:57:59.351583 sshd-session[7428]: pam_unix(sshd:session): session closed for user core
May 15 12:57:59.356837 systemd-logind[1531]: Session 50 logged out. Waiting for processes to exit.
May 15 12:57:59.357687 systemd[1]: sshd@50-172.236.126.108:22-139.178.89.65:53014.service: Deactivated successfully.
May 15 12:57:59.360281 systemd[1]: session-50.scope: Deactivated successfully.
May 15 12:57:59.363118 systemd-logind[1531]: Removed session 50.
May 15 12:57:59.411523 systemd[1]: Started sshd@51-172.236.126.108:22-139.178.89.65:53024.service - OpenSSH per-connection server daemon (139.178.89.65:53024).
May 15 12:57:59.744795 sshd[7440]: Accepted publickey for core from 139.178.89.65 port 53024 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:57:59.746759 sshd-session[7440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:57:59.753601 systemd-logind[1531]: New session 51 of user core.
May 15 12:57:59.771830 systemd[1]: Started session-51.scope - Session 51 of User core.
May 15 12:58:00.057198 sshd[7442]: Connection closed by 139.178.89.65 port 53024
May 15 12:58:00.057969 sshd-session[7440]: pam_unix(sshd:session): session closed for user core
May 15 12:58:00.063420 systemd[1]: sshd@51-172.236.126.108:22-139.178.89.65:53024.service: Deactivated successfully.
May 15 12:58:00.067039 systemd[1]: session-51.scope: Deactivated successfully.
May 15 12:58:00.068124 systemd-logind[1531]: Session 51 logged out. Waiting for processes to exit.
May 15 12:58:00.070640 systemd-logind[1531]: Removed session 51.
May 15 12:58:00.940441 systemd[1]: Started sshd@52-172.236.126.108:22-218.92.0.169:20930.service - OpenSSH per-connection server daemon (218.92.0.169:20930).
May 15 12:58:05.123984 systemd[1]: Started sshd@53-172.236.126.108:22-139.178.89.65:53036.service - OpenSSH per-connection server daemon (139.178.89.65:53036).
May 15 12:58:05.479938 sshd[7460]: Accepted publickey for core from 139.178.89.65 port 53036 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:05.481712 sshd-session[7460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:05.486481 systemd-logind[1531]: New session 52 of user core.
May 15 12:58:05.492738 systemd[1]: Started session-52.scope - Session 52 of User core.
May 15 12:58:05.787610 sshd[7462]: Connection closed by 139.178.89.65 port 53036
May 15 12:58:05.788469 sshd-session[7460]: pam_unix(sshd:session): session closed for user core
May 15 12:58:05.792911 systemd-logind[1531]: Session 52 logged out. Waiting for processes to exit.
May 15 12:58:05.794291 systemd[1]: sshd@53-172.236.126.108:22-139.178.89.65:53036.service: Deactivated successfully.
May 15 12:58:05.797334 systemd[1]: session-52.scope: Deactivated successfully.
May 15 12:58:05.798908 systemd-logind[1531]: Removed session 52.
May 15 12:58:07.230458 sshd[7454]: Received disconnect from 218.92.0.169 port 20930:11: [preauth]
May 15 12:58:07.230458 sshd[7454]: Disconnected from authenticating user root 218.92.0.169 port 20930 [preauth]
May 15 12:58:07.233313 systemd[1]: sshd@52-172.236.126.108:22-218.92.0.169:20930.service: Deactivated successfully.
May 15 12:58:09.017502 containerd[1555]: time="2025-05-15T12:58:09.017422508Z" level=warning msg="container event discarded" container=174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2 type=CONTAINER_CREATED_EVENT
May 15 12:58:09.150772 containerd[1555]: time="2025-05-15T12:58:09.150711315Z" level=warning msg="container event discarded" container=174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2 type=CONTAINER_STARTED_EVENT
May 15 12:58:10.850615 systemd[1]: Started sshd@54-172.236.126.108:22-139.178.89.65:41488.service - OpenSSH per-connection server daemon (139.178.89.65:41488).
May 15 12:58:11.202265 sshd[7476]: Accepted publickey for core from 139.178.89.65 port 41488 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:11.203900 sshd-session[7476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:11.209255 systemd-logind[1531]: New session 53 of user core.
May 15 12:58:11.213672 systemd[1]: Started session-53.scope - Session 53 of User core.
May 15 12:58:11.504849 sshd[7478]: Connection closed by 139.178.89.65 port 41488
May 15 12:58:11.505590 sshd-session[7476]: pam_unix(sshd:session): session closed for user core
May 15 12:58:11.510302 systemd[1]: sshd@54-172.236.126.108:22-139.178.89.65:41488.service: Deactivated successfully.
May 15 12:58:11.512442 systemd[1]: session-53.scope: Deactivated successfully.
May 15 12:58:11.514741 systemd-logind[1531]: Session 53 logged out. Waiting for processes to exit.
May 15 12:58:11.516410 systemd-logind[1531]: Removed session 53.
May 15 12:58:11.846048 containerd[1555]: time="2025-05-15T12:58:11.845968272Z" level=warning msg="container event discarded" container=174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2 type=CONTAINER_STOPPED_EVENT
May 15 12:58:12.026757 kubelet[2697]: E0515 12:58:12.026395 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:58:15.498206 containerd[1555]: time="2025-05-15T12:58:15.498151184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"4dba84aec647676049b06f7a569ed8c495cf046a0b3195239ebc6c78c90a5c07\" pid:7502 exited_at:{seconds:1747313895 nanos:497451182}"
May 15 12:58:16.470323 containerd[1555]: time="2025-05-15T12:58:16.470172351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"06609a9c682c43635354110975cc3bfb70177655b5adb1d126b0a0e6984ae88c\" pid:7525 exited_at:{seconds:1747313896 nanos:469922840}"
May 15 12:58:16.570329 systemd[1]: Started sshd@55-172.236.126.108:22-139.178.89.65:39330.service - OpenSSH per-connection server daemon (139.178.89.65:39330).
May 15 12:58:16.916490 sshd[7535]: Accepted publickey for core from 139.178.89.65 port 39330 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:16.918248 sshd-session[7535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:16.923745 systemd-logind[1531]: New session 54 of user core.
May 15 12:58:16.931701 systemd[1]: Started session-54.scope - Session 54 of User core.
May 15 12:58:17.226309 sshd[7537]: Connection closed by 139.178.89.65 port 39330
May 15 12:58:17.226853 sshd-session[7535]: pam_unix(sshd:session): session closed for user core
May 15 12:58:17.233017 systemd-logind[1531]: Session 54 logged out. Waiting for processes to exit.
May 15 12:58:17.233169 systemd[1]: sshd@55-172.236.126.108:22-139.178.89.65:39330.service: Deactivated successfully.
May 15 12:58:17.235782 systemd[1]: session-54.scope: Deactivated successfully.
May 15 12:58:17.238447 systemd-logind[1531]: Removed session 54.
May 15 12:58:21.488161 containerd[1555]: time="2025-05-15T12:58:21.488067617Z" level=warning msg="container event discarded" container=2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499 type=CONTAINER_CREATED_EVENT
May 15 12:58:21.586397 containerd[1555]: time="2025-05-15T12:58:21.586308309Z" level=warning msg="container event discarded" container=2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499 type=CONTAINER_STARTED_EVENT
May 15 12:58:22.288298 systemd[1]: Started sshd@56-172.236.126.108:22-139.178.89.65:39338.service - OpenSSH per-connection server daemon (139.178.89.65:39338).
May 15 12:58:22.636881 sshd[7550]: Accepted publickey for core from 139.178.89.65 port 39338 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:22.638528 sshd-session[7550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:22.644263 systemd-logind[1531]: New session 55 of user core.
May 15 12:58:22.649697 systemd[1]: Started session-55.scope - Session 55 of User core.
May 15 12:58:22.938323 sshd[7552]: Connection closed by 139.178.89.65 port 39338
May 15 12:58:22.939398 sshd-session[7550]: pam_unix(sshd:session): session closed for user core
May 15 12:58:22.944360 systemd-logind[1531]: Session 55 logged out. Waiting for processes to exit.
May 15 12:58:22.945177 systemd[1]: sshd@56-172.236.126.108:22-139.178.89.65:39338.service: Deactivated successfully.
May 15 12:58:22.947341 systemd[1]: session-55.scope: Deactivated successfully.
May 15 12:58:22.949454 systemd-logind[1531]: Removed session 55.
May 15 12:58:24.371534 containerd[1555]: time="2025-05-15T12:58:24.371440962Z" level=warning msg="container event discarded" container=d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 type=CONTAINER_CREATED_EVENT
May 15 12:58:24.371534 containerd[1555]: time="2025-05-15T12:58:24.371516152Z" level=warning msg="container event discarded" container=d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 type=CONTAINER_STARTED_EVENT
May 15 12:58:25.302800 containerd[1555]: time="2025-05-15T12:58:25.302723709Z" level=warning msg="container event discarded" container=db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498 type=CONTAINER_CREATED_EVENT
May 15 12:58:25.302800 containerd[1555]: time="2025-05-15T12:58:25.302770359Z" level=warning msg="container event discarded" container=db6c6602dfc6f4e516fe9ea801f7ec749c33999ed0dc93561a47a2c504598498 type=CONTAINER_STARTED_EVENT
May 15 12:58:25.331977 containerd[1555]: time="2025-05-15T12:58:25.331910387Z" level=warning msg="container event discarded" container=3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3 type=CONTAINER_CREATED_EVENT
May 15 12:58:25.417389 containerd[1555]: time="2025-05-15T12:58:25.417311115Z" level=warning msg="container event discarded" container=3418881682846ea5827bee45c44929cffefaafc0f5bb8dc5babea97029cb3fc3 type=CONTAINER_STARTED_EVENT
May 15 12:58:27.025694 kubelet[2697]: E0515 12:58:27.025639 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:58:27.337988 containerd[1555]: time="2025-05-15T12:58:27.337881648Z" level=warning msg="container event discarded" container=0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d type=CONTAINER_CREATED_EVENT
May 15 12:58:27.337988 containerd[1555]: time="2025-05-15T12:58:27.337966958Z" level=warning msg="container event discarded" container=0ae6bc1244f537358cb4f237c689b92089dcdf7fe035d749ef16447ef9ae813d type=CONTAINER_STARTED_EVENT
May 15 12:58:27.469396 containerd[1555]: time="2025-05-15T12:58:27.469317654Z" level=warning msg="container event discarded" container=19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 type=CONTAINER_CREATED_EVENT
May 15 12:58:27.469396 containerd[1555]: time="2025-05-15T12:58:27.469386995Z" level=warning msg="container event discarded" container=19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 type=CONTAINER_STARTED_EVENT
May 15 12:58:28.005256 systemd[1]: Started sshd@57-172.236.126.108:22-139.178.89.65:33814.service - OpenSSH per-connection server daemon (139.178.89.65:33814).
May 15 12:58:28.364838 sshd[7566]: Accepted publickey for core from 139.178.89.65 port 33814 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:28.366688 sshd-session[7566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:28.372032 systemd-logind[1531]: New session 56 of user core.
May 15 12:58:28.379518 containerd[1555]: time="2025-05-15T12:58:28.379435626Z" level=warning msg="container event discarded" container=a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a type=CONTAINER_CREATED_EVENT
May 15 12:58:28.379810 containerd[1555]: time="2025-05-15T12:58:28.379516106Z" level=warning msg="container event discarded" container=a4dfeb8da4eaa5f663ea204682c0f9431aabf39490cd03092b9c30d4eee3fd7a type=CONTAINER_STARTED_EVENT
May 15 12:58:28.380689 systemd[1]: Started session-56.scope - Session 56 of User core.
May 15 12:58:28.403760 containerd[1555]: time="2025-05-15T12:58:28.403696995Z" level=warning msg="container event discarded" container=dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff type=CONTAINER_CREATED_EVENT
May 15 12:58:28.506305 containerd[1555]: time="2025-05-15T12:58:28.506078513Z" level=warning msg="container event discarded" container=dca997b8e044dfd6dd2e54856758785b557ba596c62d4a526aeaebbcbf9720ff type=CONTAINER_STARTED_EVENT
May 15 12:58:28.535420 containerd[1555]: time="2025-05-15T12:58:28.535357601Z" level=warning msg="container event discarded" container=52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 type=CONTAINER_CREATED_EVENT
May 15 12:58:28.535767 containerd[1555]: time="2025-05-15T12:58:28.535609302Z" level=warning msg="container event discarded" container=52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 type=CONTAINER_STARTED_EVENT
May 15 12:58:28.680004 sshd[7568]: Connection closed by 139.178.89.65 port 33814
May 15 12:58:28.680820 sshd-session[7566]: pam_unix(sshd:session): session closed for user core
May 15 12:58:28.686355 systemd[1]: sshd@57-172.236.126.108:22-139.178.89.65:33814.service: Deactivated successfully.
May 15 12:58:28.689158 systemd[1]: session-56.scope: Deactivated successfully.
May 15 12:58:28.690264 systemd-logind[1531]: Session 56 logged out. Waiting for processes to exit.
May 15 12:58:28.692673 systemd-logind[1531]: Removed session 56.
May 15 12:58:29.328775 containerd[1555]: time="2025-05-15T12:58:29.328693721Z" level=warning msg="container event discarded" container=3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f type=CONTAINER_CREATED_EVENT
May 15 12:58:29.328775 containerd[1555]: time="2025-05-15T12:58:29.328763741Z" level=warning msg="container event discarded" container=3805ceff950260177a9f6f8406b73ef51fb0ce0adfb91df53a20b50133a0956f type=CONTAINER_STARTED_EVENT
May 15 12:58:31.803245 containerd[1555]: time="2025-05-15T12:58:31.802926652Z" level=warning msg="container event discarded" container=51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd type=CONTAINER_CREATED_EVENT
May 15 12:58:31.888408 containerd[1555]: time="2025-05-15T12:58:31.888302934Z" level=warning msg="container event discarded" container=51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd type=CONTAINER_STARTED_EVENT
May 15 12:58:32.026762 kubelet[2697]: E0515 12:58:32.026214 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:58:33.026523 kubelet[2697]: E0515 12:58:33.026468 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:58:33.742207 systemd[1]: Started sshd@58-172.236.126.108:22-139.178.89.65:33816.service - OpenSSH per-connection server daemon (139.178.89.65:33816).
May 15 12:58:34.093941 sshd[7598]: Accepted publickey for core from 139.178.89.65 port 33816 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:34.096352 sshd-session[7598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:34.102856 systemd-logind[1531]: New session 57 of user core.
May 15 12:58:34.111739 systemd[1]: Started session-57.scope - Session 57 of User core.
May 15 12:58:34.403911 sshd[7600]: Connection closed by 139.178.89.65 port 33816
May 15 12:58:34.404835 sshd-session[7598]: pam_unix(sshd:session): session closed for user core
May 15 12:58:34.411171 systemd[1]: sshd@58-172.236.126.108:22-139.178.89.65:33816.service: Deactivated successfully.
May 15 12:58:34.413435 systemd[1]: session-57.scope: Deactivated successfully.
May 15 12:58:34.414799 systemd-logind[1531]: Session 57 logged out. Waiting for processes to exit.
May 15 12:58:34.416336 systemd-logind[1531]: Removed session 57.
May 15 12:58:36.812976 containerd[1555]: time="2025-05-15T12:58:36.812890189Z" level=warning msg="container event discarded" container=75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca type=CONTAINER_CREATED_EVENT
May 15 12:58:36.899538 containerd[1555]: time="2025-05-15T12:58:36.899431970Z" level=warning msg="container event discarded" container=75317d3f819c88d12536bdf74f3f32bf7d93e11e6e3e426068da446f06deacca type=CONTAINER_STARTED_EVENT
May 15 12:58:39.026016 kubelet[2697]: E0515 12:58:39.025950 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:58:39.469079 systemd[1]: Started sshd@59-172.236.126.108:22-139.178.89.65:43112.service - OpenSSH per-connection server daemon (139.178.89.65:43112).
May 15 12:58:39.814524 sshd[7612]: Accepted publickey for core from 139.178.89.65 port 43112 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:39.816444 sshd-session[7612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:39.822410 systemd-logind[1531]: New session 58 of user core.
May 15 12:58:39.828739 systemd[1]: Started session-58.scope - Session 58 of User core.
May 15 12:58:40.117408 sshd[7614]: Connection closed by 139.178.89.65 port 43112
May 15 12:58:40.118222 sshd-session[7612]: pam_unix(sshd:session): session closed for user core
May 15 12:58:40.123107 systemd[1]: sshd@59-172.236.126.108:22-139.178.89.65:43112.service: Deactivated successfully.
May 15 12:58:40.125655 systemd[1]: session-58.scope: Deactivated successfully.
May 15 12:58:40.126597 systemd-logind[1531]: Session 58 logged out. Waiting for processes to exit.
May 15 12:58:40.128396 systemd-logind[1531]: Removed session 58.
May 15 12:58:40.369704 containerd[1555]: time="2025-05-15T12:58:40.369463359Z" level=warning msg="container event discarded" container=2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499 type=CONTAINER_STOPPED_EVENT
May 15 12:58:40.428919 containerd[1555]: time="2025-05-15T12:58:40.428838069Z" level=warning msg="container event discarded" container=49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea type=CONTAINER_STOPPED_EVENT
May 15 12:58:40.863322 containerd[1555]: time="2025-05-15T12:58:40.863215293Z" level=warning msg="container event discarded" container=89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f type=CONTAINER_CREATED_EVENT
May 15 12:58:40.863322 containerd[1555]: time="2025-05-15T12:58:40.863304473Z" level=warning msg="container event discarded" container=89142fcf6da934e8d0b1dda0e615fcf5bf32c0b4c19bd15e5f7d6ffc5611b81f type=CONTAINER_STARTED_EVENT
May 15 12:58:40.883545 containerd[1555]: time="2025-05-15T12:58:40.883500805Z" level=warning msg="container event discarded" container=4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742 type=CONTAINER_CREATED_EVENT
May 15 12:58:40.957786 containerd[1555]: time="2025-05-15T12:58:40.957699269Z" level=warning msg="container event discarded" container=4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742 type=CONTAINER_STARTED_EVENT
May 15 12:58:41.019166 containerd[1555]: time="2025-05-15T12:58:41.019081827Z" level=warning msg="container event discarded" container=4569c7d4f80444235198e23a8d030c87dce575bd4903d90715354ca61511d742 type=CONTAINER_STOPPED_EVENT
May 15 12:58:41.362331 containerd[1555]: time="2025-05-15T12:58:41.362226053Z" level=warning msg="container event discarded" container=2a5b67820bbf6958a58f6433046c0112bbd0298ca1ca9e0d60716180f96a8499 type=CONTAINER_DELETED_EVENT
May 15 12:58:41.406620 containerd[1555]: time="2025-05-15T12:58:41.406512280Z" level=warning msg="container event discarded" container=174ceb236b3097feee66519e5c9732e4bef296cb46895e805a16e07be34fa0b2 type=CONTAINER_DELETED_EVENT
May 15 12:58:41.420814 containerd[1555]: time="2025-05-15T12:58:41.420759161Z" level=warning msg="container event discarded" container=8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6 type=CONTAINER_CREATED_EVENT
May 15 12:58:41.420814 containerd[1555]: time="2025-05-15T12:58:41.420796311Z" level=warning msg="container event discarded" container=e19f29b28b4bbe1ccbeadb52733c6febb7cac8850d9b2f3c5307ea3456032ef7 type=CONTAINER_DELETED_EVENT
May 15 12:58:41.466100 containerd[1555]: time="2025-05-15T12:58:41.466014571Z" level=warning msg="container event discarded" container=80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5 type=CONTAINER_STOPPED_EVENT
May 15 12:58:41.541491 containerd[1555]: time="2025-05-15T12:58:41.541436968Z" level=warning msg="container event discarded" container=1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6 type=CONTAINER_STOPPED_EVENT
May 15 12:58:41.541491 containerd[1555]: time="2025-05-15T12:58:41.541486278Z" level=warning msg="container event discarded" container=8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6 type=CONTAINER_STARTED_EVENT
May 15 12:58:42.200693 containerd[1555]: time="2025-05-15T12:58:42.200550662Z" level=warning msg="container event discarded" container=a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d type=CONTAINER_CREATED_EVENT
May 15 12:58:42.200693 containerd[1555]: time="2025-05-15T12:58:42.200658842Z" level=warning msg="container event discarded" container=a5009569b6c37745c600bff57670ceab1d79fe4166ee76ebf5ee7440d882ae0d type=CONTAINER_STARTED_EVENT
May 15 12:58:42.231919 containerd[1555]: time="2025-05-15T12:58:42.231874762Z" level=warning msg="container event discarded" container=1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542 type=CONTAINER_CREATED_EVENT
May 15 12:58:42.326334 containerd[1555]: time="2025-05-15T12:58:42.326255906Z" level=warning msg="container event discarded" container=1c6a34f3d4cab8eb1e238264881cba0fd07664d195c442b7bceb0cbc403d1542 type=CONTAINER_STARTED_EVENT
May 15 12:58:42.370643 containerd[1555]: time="2025-05-15T12:58:42.370533642Z" level=warning msg="container event discarded" container=80c0e2785bc596ee5007d4ee9632662e0c6d3406be6d7cc95a90acdeb1d19ed5 type=CONTAINER_DELETED_EVENT
May 15 12:58:42.616166 containerd[1555]: time="2025-05-15T12:58:42.616094610Z" level=warning msg="container event discarded" container=8efd8e35df5668adf093de9539b5282aa9ec327cc92feaf9855e7a56b1e1d3c6 type=CONTAINER_STOPPED_EVENT
May 15 12:58:43.068373 containerd[1555]: time="2025-05-15T12:58:43.068180047Z" level=warning msg="container event discarded" container=7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a type=CONTAINER_CREATED_EVENT
May 15 12:58:43.145719 containerd[1555]: time="2025-05-15T12:58:43.145675930Z" level=warning msg="container event discarded" container=7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a type=CONTAINER_STARTED_EVENT
May 15 12:58:43.482590 containerd[1555]: time="2025-05-15T12:58:43.482350837Z" level=warning msg="container event discarded" container=57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c type=CONTAINER_CREATED_EVENT
May 15 12:58:43.594925 containerd[1555]: time="2025-05-15T12:58:43.594838193Z" level=warning msg="container event discarded" container=7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a type=CONTAINER_STOPPED_EVENT
May 15 12:58:43.670430 containerd[1555]: time="2025-05-15T12:58:43.670126568Z" level=warning msg="container event discarded" container=2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387 type=CONTAINER_CREATED_EVENT
May 15 12:58:43.726643 containerd[1555]: time="2025-05-15T12:58:43.726596257Z" level=warning msg="container event discarded" container=57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c type=CONTAINER_STARTED_EVENT
May 15 12:58:43.726643 containerd[1555]: time="2025-05-15T12:58:43.726640987Z" level=warning msg="container event discarded" container=19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 type=CONTAINER_STOPPED_EVENT
May 15 12:58:43.918285 containerd[1555]: time="2025-05-15T12:58:43.918195033Z" level=warning msg="container event discarded" container=2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387 type=CONTAINER_STARTED_EVENT
May 15 12:58:44.422583 containerd[1555]: time="2025-05-15T12:58:44.422468216Z" level=warning msg="container event discarded" container=7f2cc0c30d0888aa8dbc0d88a4150bfff821f754958b77a2a9fb4b0c9817e98a type=CONTAINER_DELETED_EVENT
May 15 12:58:44.826698 containerd[1555]: time="2025-05-15T12:58:44.826651287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"9779fca3a673d6a0fca85e88f3d8cda7da82bde70c1d1eecafae0ffab0596a33\" pid:7637 exited_at:{seconds:1747313924 nanos:826348816}"
May 15 12:58:45.026204 kubelet[2697]: E0515 12:58:45.026160 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:58:45.088989 containerd[1555]: time="2025-05-15T12:58:45.088837648Z" level=warning msg="container event discarded" container=de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49 type=CONTAINER_CREATED_EVENT
May 15 12:58:45.088989 containerd[1555]: time="2025-05-15T12:58:45.088880788Z" level=warning msg="container event discarded" container=de52dc453dcd8133311420d6cd2510546771dbb4fd2a31629e5279e067e11c49 type=CONTAINER_STARTED_EVENT
May 15 12:58:45.113168 containerd[1555]: time="2025-05-15T12:58:45.113134483Z" level=warning msg="container event discarded" container=27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0 type=CONTAINER_CREATED_EVENT
May 15 12:58:45.185436 systemd[1]: Started sshd@60-172.236.126.108:22-139.178.89.65:43118.service - OpenSSH per-connection server daemon (139.178.89.65:43118).
May 15 12:58:45.197771 containerd[1555]: time="2025-05-15T12:58:45.197515169Z" level=warning msg="container event discarded" container=27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0 type=CONTAINER_STARTED_EVENT
May 15 12:58:45.498249 containerd[1555]: time="2025-05-15T12:58:45.498095733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"32322cb9803358e3e7b31754fb424b79fdd2c7f6e12b6219296c60c56d30c65d\" pid:7662 exited_at:{seconds:1747313925 nanos:497724732}"
May 15 12:58:45.523230 sshd[7647]: Accepted publickey for core from 139.178.89.65 port 43118 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:45.524905 sshd-session[7647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:45.530475 systemd-logind[1531]: New session 59 of user core.
May 15 12:58:45.534691 systemd[1]: Started session-59.scope - Session 59 of User core.
May 15 12:58:45.836355 sshd[7673]: Connection closed by 139.178.89.65 port 43118
May 15 12:58:45.837147 sshd-session[7647]: pam_unix(sshd:session): session closed for user core
May 15 12:58:45.843257 systemd-logind[1531]: Session 59 logged out. Waiting for processes to exit.
May 15 12:58:45.843457 systemd[1]: sshd@60-172.236.126.108:22-139.178.89.65:43118.service: Deactivated successfully.
May 15 12:58:45.846363 systemd[1]: session-59.scope: Deactivated successfully.
May 15 12:58:45.849501 systemd-logind[1531]: Removed session 59.
May 15 12:58:46.026338 kubelet[2697]: E0515 12:58:46.026265 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17" May 15 12:58:46.145267 containerd[1555]: time="2025-05-15T12:58:46.145090390Z" level=warning msg="container event discarded" container=fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de type=CONTAINER_CREATED_EVENT May 15 12:58:46.270138 containerd[1555]: time="2025-05-15T12:58:46.270048917Z" level=warning msg="container event discarded" container=fd5bab077448994b4638780bb86d85128fb6ccbcddadf5473b1fceaed62567de type=CONTAINER_STARTED_EVENT May 15 12:58:46.474294 containerd[1555]: time="2025-05-15T12:58:46.474127620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"de3476fa1997580ad0992fc20eff0f388b73214a34c4d068d30a00a196d7e949\" pid:7697 exited_at:{seconds:1747313926 nanos:473484138}" May 15 12:58:50.899278 systemd[1]: Started sshd@61-172.236.126.108:22-139.178.89.65:35802.service - OpenSSH per-connection server daemon (139.178.89.65:35802). May 15 12:58:51.189193 containerd[1555]: time="2025-05-15T12:58:51.188966042Z" level=warning msg="container event discarded" container=2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387 type=CONTAINER_STOPPED_EVENT May 15 12:58:51.234997 sshd[7708]: Accepted publickey for core from 139.178.89.65 port 35802 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE May 15 12:58:51.237855 sshd-session[7708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 12:58:51.241179 containerd[1555]: time="2025-05-15T12:58:51.241109792Z" level=warning msg="container event discarded" container=52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 type=CONTAINER_STOPPED_EVENT May 15 12:58:51.244519 systemd-logind[1531]: New session 60 of user core. May 15 12:58:51.254732 systemd[1]: Started session-60.scope - Session 60 of User core. May 15 12:58:51.463060 containerd[1555]: time="2025-05-15T12:58:51.462614827Z" level=warning msg="container event discarded" container=2d0c8afaf6a98bb45212a452696cc415da3e7c2bbf763aebfc42d9c536fbe387 type=CONTAINER_DELETED_EVENT May 15 12:58:51.533065 sshd[7710]: Connection closed by 139.178.89.65 port 35802 May 15 12:58:51.534037 sshd-session[7708]: pam_unix(sshd:session): session closed for user core May 15 12:58:51.538522 systemd[1]: sshd@61-172.236.126.108:22-139.178.89.65:35802.service: Deactivated successfully. May 15 12:58:51.541061 systemd[1]: session-60.scope: Deactivated successfully. May 15 12:58:51.544589 systemd-logind[1531]: Session 60 logged out. Waiting for processes to exit. May 15 12:58:51.546821 systemd-logind[1531]: Removed session 60. 
May 15 12:58:51.720422 containerd[1555]: time="2025-05-15T12:58:51.720117346Z" level=warning msg="container event discarded" container=ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5 type=CONTAINER_CREATED_EVENT
May 15 12:58:51.720422 containerd[1555]: time="2025-05-15T12:58:51.720292767Z" level=warning msg="container event discarded" container=ceac3ef61f12d8ac2c2685995eb69247b8d06400eecadbffcf5aaa787a2b87c5 type=CONTAINER_STARTED_EVENT
May 15 12:58:51.740601 containerd[1555]: time="2025-05-15T12:58:51.740504887Z" level=warning msg="container event discarded" container=df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98 type=CONTAINER_CREATED_EVENT
May 15 12:58:51.837892 containerd[1555]: time="2025-05-15T12:58:51.837817123Z" level=warning msg="container event discarded" container=df30cb51887a8532ba15e3887ddf27015e3f569e4e10bd6d71e3c10c7b625b98 type=CONTAINER_STARTED_EVENT
May 15 12:58:52.809949 containerd[1555]: time="2025-05-15T12:58:52.809826623Z" level=warning msg="container event discarded" container=51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd type=CONTAINER_STOPPED_EVENT
May 15 12:58:52.888351 containerd[1555]: time="2025-05-15T12:58:52.888276613Z" level=warning msg="container event discarded" container=d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 type=CONTAINER_STOPPED_EVENT
May 15 12:58:53.244110 containerd[1555]: time="2025-05-15T12:58:53.244017656Z" level=warning msg="container event discarded" container=4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f type=CONTAINER_CREATED_EVENT
May 15 12:58:53.336732 containerd[1555]: time="2025-05-15T12:58:53.336639073Z" level=warning msg="container event discarded" container=4b9c0c21eecab2eaee93cfaeea448122a938d6ce55d2bf5fa8f7a897545a9f8f type=CONTAINER_STARTED_EVENT
May 15 12:58:53.492399 containerd[1555]: time="2025-05-15T12:58:53.492092808Z" level=warning msg="container event discarded" container=51e94d8ec2eeed000815060fe785bca6fd2e089442f362a8979f2fc3533a68cd type=CONTAINER_DELETED_EVENT
May 15 12:58:56.602264 systemd[1]: Started sshd@62-172.236.126.108:22-139.178.89.65:49468.service - OpenSSH per-connection server daemon (139.178.89.65:49468).
May 15 12:58:56.949151 sshd[7722]: Accepted publickey for core from 139.178.89.65 port 49468 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:58:56.950944 sshd-session[7722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:58:56.956786 systemd-logind[1531]: New session 61 of user core.
May 15 12:58:56.960707 systemd[1]: Started session-61.scope - Session 61 of User core.
May 15 12:58:57.255480 sshd[7724]: Connection closed by 139.178.89.65 port 49468
May 15 12:58:57.256282 sshd-session[7722]: pam_unix(sshd:session): session closed for user core
May 15 12:58:57.262795 systemd[1]: sshd@62-172.236.126.108:22-139.178.89.65:49468.service: Deactivated successfully.
May 15 12:58:57.265198 systemd[1]: session-61.scope: Deactivated successfully.
May 15 12:58:57.268233 systemd-logind[1531]: Session 61 logged out. Waiting for processes to exit.
May 15 12:58:57.269376 systemd-logind[1531]: Removed session 61.
May 15 12:59:02.324479 systemd[1]: Started sshd@63-172.236.126.108:22-139.178.89.65:49478.service - OpenSSH per-connection server daemon (139.178.89.65:49478).
May 15 12:59:02.661823 sshd[7738]: Accepted publickey for core from 139.178.89.65 port 49478 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:02.663586 sshd-session[7738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:02.669875 systemd-logind[1531]: New session 62 of user core.
May 15 12:59:02.673675 systemd[1]: Started session-62.scope - Session 62 of User core.
May 15 12:59:02.961986 sshd[7740]: Connection closed by 139.178.89.65 port 49478
May 15 12:59:02.962231 sshd-session[7738]: pam_unix(sshd:session): session closed for user core
May 15 12:59:02.968302 systemd-logind[1531]: Session 62 logged out. Waiting for processes to exit.
May 15 12:59:02.969683 systemd[1]: sshd@63-172.236.126.108:22-139.178.89.65:49478.service: Deactivated successfully.
May 15 12:59:02.972037 systemd[1]: session-62.scope: Deactivated successfully.
May 15 12:59:02.973620 systemd-logind[1531]: Removed session 62.
May 15 12:59:08.023325 systemd[1]: Started sshd@64-172.236.126.108:22-139.178.89.65:47972.service - OpenSSH per-connection server daemon (139.178.89.65:47972).
May 15 12:59:08.373608 sshd[7759]: Accepted publickey for core from 139.178.89.65 port 47972 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:08.376059 sshd-session[7759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:08.383507 systemd-logind[1531]: New session 63 of user core.
May 15 12:59:08.390726 systemd[1]: Started session-63.scope - Session 63 of User core.
May 15 12:59:08.678282 sshd[7762]: Connection closed by 139.178.89.65 port 47972
May 15 12:59:08.679498 sshd-session[7759]: pam_unix(sshd:session): session closed for user core
May 15 12:59:08.685285 systemd[1]: sshd@64-172.236.126.108:22-139.178.89.65:47972.service: Deactivated successfully.
May 15 12:59:08.687734 systemd[1]: session-63.scope: Deactivated successfully.
May 15 12:59:08.689433 systemd-logind[1531]: Session 63 logged out. Waiting for processes to exit.
May 15 12:59:08.690974 systemd-logind[1531]: Removed session 63.
May 15 12:59:10.029900 kubelet[2697]: E0515 12:59:10.029853 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:13.742664 systemd[1]: Started sshd@65-172.236.126.108:22-139.178.89.65:47978.service - OpenSSH per-connection server daemon (139.178.89.65:47978).
May 15 12:59:14.084890 sshd[7776]: Accepted publickey for core from 139.178.89.65 port 47978 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:14.086967 sshd-session[7776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:14.093801 systemd-logind[1531]: New session 64 of user core.
May 15 12:59:14.098746 systemd[1]: Started session-64.scope - Session 64 of User core.
May 15 12:59:14.413476 sshd[7778]: Connection closed by 139.178.89.65 port 47978
May 15 12:59:14.414122 sshd-session[7776]: pam_unix(sshd:session): session closed for user core
May 15 12:59:14.418806 systemd-logind[1531]: Session 64 logged out. Waiting for processes to exit.
May 15 12:59:14.419446 systemd[1]: sshd@65-172.236.126.108:22-139.178.89.65:47978.service: Deactivated successfully.
May 15 12:59:14.425863 systemd[1]: session-64.scope: Deactivated successfully.
May 15 12:59:14.427613 systemd-logind[1531]: Removed session 64.
May 15 12:59:15.499094 containerd[1555]: time="2025-05-15T12:59:15.499046422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"61a96854a7def419b105704c49b479bd19f07c058cc609dc7e5a50b5b02f3930\" pid:7801 exited_at:{seconds:1747313955 nanos:498501260}"
May 15 12:59:16.469512 containerd[1555]: time="2025-05-15T12:59:16.469473377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"d308254a3440416488c27554ecdc0ca401423be78f50dd0f54cf785f1e105000\" pid:7825 exited_at:{seconds:1747313956 nanos:469265556}"
May 15 12:59:19.474031 systemd[1]: Started sshd@66-172.236.126.108:22-139.178.89.65:41000.service - OpenSSH per-connection server daemon (139.178.89.65:41000).
May 15 12:59:19.818357 sshd[7835]: Accepted publickey for core from 139.178.89.65 port 41000 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:19.819799 sshd-session[7835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:19.824958 systemd-logind[1531]: New session 65 of user core.
May 15 12:59:19.831701 systemd[1]: Started session-65.scope - Session 65 of User core.
May 15 12:59:20.120911 sshd[7837]: Connection closed by 139.178.89.65 port 41000
May 15 12:59:20.121517 sshd-session[7835]: pam_unix(sshd:session): session closed for user core
May 15 12:59:20.125585 systemd-logind[1531]: Session 65 logged out. Waiting for processes to exit.
May 15 12:59:20.126828 systemd[1]: sshd@66-172.236.126.108:22-139.178.89.65:41000.service: Deactivated successfully.
May 15 12:59:20.129067 systemd[1]: session-65.scope: Deactivated successfully.
May 15 12:59:20.133896 systemd-logind[1531]: Removed session 65.
May 15 12:59:25.181265 systemd[1]: Started sshd@67-172.236.126.108:22-139.178.89.65:41012.service - OpenSSH per-connection server daemon (139.178.89.65:41012).
May 15 12:59:25.521919 sshd[7849]: Accepted publickey for core from 139.178.89.65 port 41012 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:25.523597 sshd-session[7849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:25.529212 systemd-logind[1531]: New session 66 of user core.
May 15 12:59:25.533707 systemd[1]: Started session-66.scope - Session 66 of User core.
May 15 12:59:25.814713 sshd[7851]: Connection closed by 139.178.89.65 port 41012
May 15 12:59:25.814988 sshd-session[7849]: pam_unix(sshd:session): session closed for user core
May 15 12:59:25.819463 systemd[1]: sshd@67-172.236.126.108:22-139.178.89.65:41012.service: Deactivated successfully.
May 15 12:59:25.821757 systemd[1]: session-66.scope: Deactivated successfully.
May 15 12:59:25.822778 systemd-logind[1531]: Session 66 logged out. Waiting for processes to exit.
May 15 12:59:25.824180 systemd-logind[1531]: Removed session 66.
May 15 12:59:26.052840 containerd[1555]: time="2025-05-15T12:59:26.052773636Z" level=warning msg="container event discarded" container=49fcf472cad55ae4e78be8b793d4d7d59f99021dd061483483a569df6515edea type=CONTAINER_DELETED_EVENT
May 15 12:59:26.320902 containerd[1555]: time="2025-05-15T12:59:26.320821351Z" level=warning msg="container event discarded" container=19a1820cf5bf1d2943c1189593463b4290b233a81be0312e047ff677168ed303 type=CONTAINER_DELETED_EVENT
May 15 12:59:26.570525 containerd[1555]: time="2025-05-15T12:59:26.570445446Z" level=warning msg="container event discarded" container=52d9f7b0042d7afc35300cee714f8e26da0dc95c46247dfec3ed25f621dec755 type=CONTAINER_DELETED_EVENT
May 15 12:59:26.760140 containerd[1555]: time="2025-05-15T12:59:26.759992938Z" level=warning msg="container event discarded" container=d9e387df7bd17081c21258369d1932bdab0b30000d714170ec4a7947faf10598 type=CONTAINER_DELETED_EVENT
May 15 12:59:26.760140 containerd[1555]: time="2025-05-15T12:59:26.760043008Z" level=warning msg="container event discarded" container=1d5f6067deb2929fc6fc15c5a6fa3ca8cc32001e19138e242466b4ea313603b6 type=CONTAINER_DELETED_EVENT
May 15 12:59:30.883748 systemd[1]: Started sshd@68-172.236.126.108:22-139.178.89.65:43586.service - OpenSSH per-connection server daemon (139.178.89.65:43586).
May 15 12:59:31.222307 sshd[7865]: Accepted publickey for core from 139.178.89.65 port 43586 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:31.224091 sshd-session[7865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:31.229748 systemd-logind[1531]: New session 67 of user core.
May 15 12:59:31.236682 systemd[1]: Started session-67.scope - Session 67 of User core.
May 15 12:59:31.529992 sshd[7867]: Connection closed by 139.178.89.65 port 43586
May 15 12:59:31.530987 sshd-session[7865]: pam_unix(sshd:session): session closed for user core
May 15 12:59:31.535929 systemd-logind[1531]: Session 67 logged out. Waiting for processes to exit.
May 15 12:59:31.536952 systemd[1]: sshd@68-172.236.126.108:22-139.178.89.65:43586.service: Deactivated successfully.
May 15 12:59:31.540350 systemd[1]: session-67.scope: Deactivated successfully.
May 15 12:59:31.543419 systemd-logind[1531]: Removed session 67.
May 15 12:59:33.025706 kubelet[2697]: E0515 12:59:33.025672 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:35.666030 update_engine[1532]: I20250515 12:59:35.665966 1532 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 15 12:59:35.666030 update_engine[1532]: I20250515 12:59:35.666016 1532 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 15 12:59:35.666493 update_engine[1532]: I20250515 12:59:35.666302 1532 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 15 12:59:35.666876 update_engine[1532]: I20250515 12:59:35.666833 1532 omaha_request_params.cc:62] Current group set to developer
May 15 12:59:35.667575 update_engine[1532]: I20250515 12:59:35.667503 1532 update_attempter.cc:499] Already updated boot flags. Skipping.
May 15 12:59:35.667575 update_engine[1532]: I20250515 12:59:35.667522 1532 update_attempter.cc:643] Scheduling an action processor start.
May 15 12:59:35.667575 update_engine[1532]: I20250515 12:59:35.667539 1532 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 12:59:35.668583 update_engine[1532]: I20250515 12:59:35.667733 1532 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 15 12:59:35.668583 update_engine[1532]: I20250515 12:59:35.667806 1532 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 12:59:35.668583 update_engine[1532]: I20250515 12:59:35.667817 1532 omaha_request_action.cc:272] Request:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]:
May 15 12:59:35.668583 update_engine[1532]: I20250515 12:59:35.667824 1532 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:59:35.671005 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 15 12:59:35.671955 update_engine[1532]: I20250515 12:59:35.671929 1532 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:59:35.672966 update_engine[1532]: I20250515 12:59:35.672919 1532 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:59:35.726045 update_engine[1532]: E20250515 12:59:35.725978 1532 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:59:35.726169 update_engine[1532]: I20250515 12:59:35.726081 1532 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 15 12:59:36.596326 systemd[1]: Started sshd@69-172.236.126.108:22-139.178.89.65:48478.service - OpenSSH per-connection server daemon (139.178.89.65:48478).
May 15 12:59:36.932067 sshd[7881]: Accepted publickey for core from 139.178.89.65 port 48478 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:36.934088 sshd-session[7881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:36.939490 systemd-logind[1531]: New session 68 of user core.
May 15 12:59:36.944685 systemd[1]: Started session-68.scope - Session 68 of User core.
May 15 12:59:37.237345 sshd[7883]: Connection closed by 139.178.89.65 port 48478
May 15 12:59:37.237991 sshd-session[7881]: pam_unix(sshd:session): session closed for user core
May 15 12:59:37.243625 systemd-logind[1531]: Session 68 logged out. Waiting for processes to exit.
May 15 12:59:37.244102 systemd[1]: sshd@69-172.236.126.108:22-139.178.89.65:48478.service: Deactivated successfully.
May 15 12:59:37.246351 systemd[1]: session-68.scope: Deactivated successfully.
May 15 12:59:37.248371 systemd-logind[1531]: Removed session 68.
May 15 12:59:38.026506 kubelet[2697]: E0515 12:59:38.026140 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:40.026989 kubelet[2697]: E0515 12:59:40.026287 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:42.298905 systemd[1]: Started sshd@70-172.236.126.108:22-139.178.89.65:48480.service - OpenSSH per-connection server daemon (139.178.89.65:48480).
May 15 12:59:42.637021 sshd[7895]: Accepted publickey for core from 139.178.89.65 port 48480 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:42.638801 sshd-session[7895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:42.644605 systemd-logind[1531]: New session 69 of user core.
May 15 12:59:42.649720 systemd[1]: Started session-69.scope - Session 69 of User core.
May 15 12:59:42.940200 sshd[7897]: Connection closed by 139.178.89.65 port 48480
May 15 12:59:42.941153 sshd-session[7895]: pam_unix(sshd:session): session closed for user core
May 15 12:59:42.946011 systemd[1]: sshd@70-172.236.126.108:22-139.178.89.65:48480.service: Deactivated successfully.
May 15 12:59:42.949118 systemd[1]: session-69.scope: Deactivated successfully.
May 15 12:59:42.950714 systemd-logind[1531]: Session 69 logged out. Waiting for processes to exit.
May 15 12:59:42.953713 systemd-logind[1531]: Removed session 69.
May 15 12:59:44.828277 containerd[1555]: time="2025-05-15T12:59:44.828216141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"3553ba1f64a532275f332ce1a4fbcff72dc21f349428c97b3989d22ddd50b885\" pid:7920 exited_at:{seconds:1747313984 nanos:827966160}"
May 15 12:59:45.489896 containerd[1555]: time="2025-05-15T12:59:45.489814903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"240bbdc1b08ae868db9297727ee32e162bb7ceeec77c60d709cdd1821b918afe\" pid:7941 exited_at:{seconds:1747313985 nanos:489106701}"
May 15 12:59:45.668470 update_engine[1532]: I20250515 12:59:45.668351 1532 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:59:45.668990 update_engine[1532]: I20250515 12:59:45.668753 1532 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:59:45.669186 update_engine[1532]: I20250515 12:59:45.669137 1532 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:59:45.670322 update_engine[1532]: E20250515 12:59:45.670215 1532 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:59:45.670322 update_engine[1532]: I20250515 12:59:45.670331 1532 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 15 12:59:46.469708 containerd[1555]: time="2025-05-15T12:59:46.469633764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"59cc14e303344e3ea3b7a2eba63703f38a89c75d6d5845adb1934d9b56469bfb\" pid:7965 exited_at:{seconds:1747313986 nanos:469309193}"
May 15 12:59:48.012749 systemd[1]: Started sshd@71-172.236.126.108:22-139.178.89.65:42346.service - OpenSSH per-connection server daemon (139.178.89.65:42346).
May 15 12:59:48.359395 sshd[7975]: Accepted publickey for core from 139.178.89.65 port 42346 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:48.361009 sshd-session[7975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:48.365464 systemd-logind[1531]: New session 70 of user core.
May 15 12:59:48.373684 systemd[1]: Started session-70.scope - Session 70 of User core.
May 15 12:59:48.691662 sshd[7977]: Connection closed by 139.178.89.65 port 42346
May 15 12:59:48.692408 sshd-session[7975]: pam_unix(sshd:session): session closed for user core
May 15 12:59:48.697067 systemd[1]: sshd@71-172.236.126.108:22-139.178.89.65:42346.service: Deactivated successfully.
May 15 12:59:48.700282 systemd[1]: session-70.scope: Deactivated successfully.
May 15 12:59:48.701958 systemd-logind[1531]: Session 70 logged out. Waiting for processes to exit.
May 15 12:59:48.703024 systemd-logind[1531]: Removed session 70.
May 15 12:59:50.026513 kubelet[2697]: E0515 12:59:50.026067 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:51.026072 kubelet[2697]: E0515 12:59:51.026039 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:53.754839 systemd[1]: Started sshd@72-172.236.126.108:22-139.178.89.65:42360.service - OpenSSH per-connection server daemon (139.178.89.65:42360).
May 15 12:59:54.105887 sshd[7989]: Accepted publickey for core from 139.178.89.65 port 42360 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:54.107740 sshd-session[7989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:54.112623 systemd-logind[1531]: New session 71 of user core.
May 15 12:59:54.118701 systemd[1]: Started session-71.scope - Session 71 of User core.
May 15 12:59:54.407408 sshd[7991]: Connection closed by 139.178.89.65 port 42360
May 15 12:59:54.407632 sshd-session[7989]: pam_unix(sshd:session): session closed for user core
May 15 12:59:54.413154 systemd[1]: sshd@72-172.236.126.108:22-139.178.89.65:42360.service: Deactivated successfully.
May 15 12:59:54.413648 systemd-logind[1531]: Session 71 logged out. Waiting for processes to exit.
May 15 12:59:54.415359 systemd[1]: session-71.scope: Deactivated successfully.
May 15 12:59:54.417431 systemd-logind[1531]: Removed session 71.
May 15 12:59:55.666961 update_engine[1532]: I20250515 12:59:55.666850 1532 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 12:59:55.667413 update_engine[1532]: I20250515 12:59:55.667234 1532 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 12:59:55.667603 update_engine[1532]: I20250515 12:59:55.667526 1532 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 12:59:55.668166 update_engine[1532]: E20250515 12:59:55.668135 1532 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 12:59:55.668196 update_engine[1532]: I20250515 12:59:55.668183 1532 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 15 12:59:58.025584 kubelet[2697]: E0515 12:59:58.025502 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:59.026468 kubelet[2697]: E0515 12:59:59.026418 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 12:59:59.472747 systemd[1]: Started sshd@73-172.236.126.108:22-139.178.89.65:50322.service - OpenSSH per-connection server daemon (139.178.89.65:50322).
May 15 12:59:59.821823 sshd[8003]: Accepted publickey for core from 139.178.89.65 port 50322 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 12:59:59.823485 sshd-session[8003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 12:59:59.828932 systemd-logind[1531]: New session 72 of user core.
May 15 12:59:59.836699 systemd[1]: Started session-72.scope - Session 72 of User core.
May 15 13:00:00.138897 sshd[8005]: Connection closed by 139.178.89.65 port 50322
May 15 13:00:00.139940 sshd-session[8003]: pam_unix(sshd:session): session closed for user core
May 15 13:00:00.144680 systemd[1]: sshd@73-172.236.126.108:22-139.178.89.65:50322.service: Deactivated successfully.
May 15 13:00:00.149143 systemd[1]: session-72.scope: Deactivated successfully.
May 15 13:00:00.153267 systemd-logind[1531]: Session 72 logged out. Waiting for processes to exit.
May 15 13:00:00.155151 systemd-logind[1531]: Removed session 72.
May 15 13:00:05.201741 systemd[1]: Started sshd@74-172.236.126.108:22-139.178.89.65:50326.service - OpenSSH per-connection server daemon (139.178.89.65:50326).
May 15 13:00:05.544773 sshd[8019]: Accepted publickey for core from 139.178.89.65 port 50326 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:05.546710 sshd-session[8019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:05.551838 systemd-logind[1531]: New session 73 of user core.
May 15 13:00:05.555674 systemd[1]: Started session-73.scope - Session 73 of User core.
May 15 13:00:05.665823 update_engine[1532]: I20250515 13:00:05.665733 1532 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 13:00:05.666215 update_engine[1532]: I20250515 13:00:05.666060 1532 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 13:00:05.666386 update_engine[1532]: I20250515 13:00:05.666354 1532 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 13:00:05.667317 update_engine[1532]: E20250515 13:00:05.667281 1532 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 13:00:05.667391 update_engine[1532]: I20250515 13:00:05.667330 1532 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 15 13:00:05.667391 update_engine[1532]: I20250515 13:00:05.667339 1532 omaha_request_action.cc:617] Omaha request response:
May 15 13:00:05.667444 update_engine[1532]: E20250515 13:00:05.667423 1532 omaha_request_action.cc:636] Omaha request network transfer failed.
May 15 13:00:05.667469 update_engine[1532]: I20250515 13:00:05.667440 1532 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 15 13:00:05.667469 update_engine[1532]: I20250515 13:00:05.667446 1532 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 13:00:05.667469 update_engine[1532]: I20250515 13:00:05.667451 1532 update_attempter.cc:306] Processing Done.
May 15 13:00:05.667469 update_engine[1532]: E20250515 13:00:05.667464 1532 update_attempter.cc:619] Update failed.
May 15 13:00:05.667784 update_engine[1532]: I20250515 13:00:05.667470 1532 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 15 13:00:05.667784 update_engine[1532]: I20250515 13:00:05.667476 1532 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 15 13:00:05.667784 update_engine[1532]: I20250515 13:00:05.667481 1532 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 15 13:00:05.667784 update_engine[1532]: I20250515 13:00:05.667547 1532 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 13:00:05.667784 update_engine[1532]: I20250515 13:00:05.667584 1532 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 13:00:05.667784 update_engine[1532]: I20250515 13:00:05.667590 1532 omaha_request_action.cc:272] Request:
May 15 13:00:05.667784 update_engine[1532]:
May 15 13:00:05.667784 update_engine[1532]:
May 15 13:00:05.667784 update_engine[1532]:
May 15 13:00:05.667784 update_engine[1532]:
May 15 13:00:05.667784 update_engine[1532]:
May 15 13:00:05.667784 update_engine[1532]:
May 15 13:00:05.667784 update_engine[1532]: I20250515 13:00:05.667596 1532 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 13:00:05.668054 update_engine[1532]: I20250515 13:00:05.667915 1532 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 13:00:05.668080 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 15 13:00:05.668378 update_engine[1532]: I20250515 13:00:05.668131 1532 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 13:00:05.669229 update_engine[1532]: E20250515 13:00:05.669199 1532 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 13:00:05.669281 update_engine[1532]: I20250515 13:00:05.669258 1532 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 15 13:00:05.669281 update_engine[1532]: I20250515 13:00:05.669270 1532 omaha_request_action.cc:617] Omaha request response:
May 15 13:00:05.669281 update_engine[1532]: I20250515 13:00:05.669276 1532 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 13:00:05.669348 update_engine[1532]: I20250515 13:00:05.669281 1532 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 15 13:00:05.669348 update_engine[1532]: I20250515 13:00:05.669287 1532 update_attempter.cc:306] Processing Done.
May 15 13:00:05.669348 update_engine[1532]: I20250515 13:00:05.669293 1532 update_attempter.cc:310] Error event sent.
May 15 13:00:05.669348 update_engine[1532]: I20250515 13:00:05.669301 1532 update_check_scheduler.cc:74] Next update check in 41m51s
May 15 13:00:05.669698 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 15 13:00:05.852916 sshd[8021]: Connection closed by 139.178.89.65 port 50326
May 15 13:00:05.853583 sshd-session[8019]: pam_unix(sshd:session): session closed for user core
May 15 13:00:05.857416 systemd-logind[1531]: Session 73 logged out. Waiting for processes to exit.
May 15 13:00:05.858173 systemd[1]: sshd@74-172.236.126.108:22-139.178.89.65:50326.service: Deactivated successfully.
May 15 13:00:05.860102 systemd[1]: session-73.scope: Deactivated successfully.
May 15 13:00:05.862023 systemd-logind[1531]: Removed session 73.
May 15 13:00:10.913516 systemd[1]: Started sshd@75-172.236.126.108:22-139.178.89.65:33344.service - OpenSSH per-connection server daemon (139.178.89.65:33344).
May 15 13:00:11.255001 sshd[8049]: Accepted publickey for core from 139.178.89.65 port 33344 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:11.256975 sshd-session[8049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:11.262413 systemd-logind[1531]: New session 74 of user core.
May 15 13:00:11.265746 systemd[1]: Started session-74.scope - Session 74 of User core.
May 15 13:00:11.555225 sshd[8051]: Connection closed by 139.178.89.65 port 33344
May 15 13:00:11.555670 sshd-session[8049]: pam_unix(sshd:session): session closed for user core
May 15 13:00:11.560185 systemd-logind[1531]: Session 74 logged out. Waiting for processes to exit.
May 15 13:00:11.560970 systemd[1]: sshd@75-172.236.126.108:22-139.178.89.65:33344.service: Deactivated successfully.
May 15 13:00:11.563449 systemd[1]: session-74.scope: Deactivated successfully.
May 15 13:00:11.564708 systemd-logind[1531]: Removed session 74.
May 15 13:00:15.495391 containerd[1555]: time="2025-05-15T13:00:15.495353187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"5f20fddb005aa4a48faa1df38eff4dd29e0c06a9b72381fd01d09a5d711c85be\" pid:8074 exited_at:{seconds:1747314015 nanos:494984373}"
May 15 13:00:16.466142 containerd[1555]: time="2025-05-15T13:00:16.466095095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"1518a748c05788faa651a12c8c833c8950cfa0edf000ee23cf614f809ef5e5c1\" pid:8097 exited_at:{seconds:1747314016 nanos:465910843}"
May 15 13:00:16.621886 systemd[1]: Started sshd@76-172.236.126.108:22-139.178.89.65:46004.service - OpenSSH per-connection server daemon (139.178.89.65:46004).
May 15 13:00:16.971101 sshd[8107]: Accepted publickey for core from 139.178.89.65 port 46004 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:16.972982 sshd-session[8107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:16.977691 systemd-logind[1531]: New session 75 of user core.
May 15 13:00:16.982697 systemd[1]: Started session-75.scope - Session 75 of User core.
May 15 13:00:17.283831 sshd[8109]: Connection closed by 139.178.89.65 port 46004
May 15 13:00:17.284523 sshd-session[8107]: pam_unix(sshd:session): session closed for user core
May 15 13:00:17.289405 systemd[1]: sshd@76-172.236.126.108:22-139.178.89.65:46004.service: Deactivated successfully.
May 15 13:00:17.292114 systemd[1]: session-75.scope: Deactivated successfully.
May 15 13:00:17.293090 systemd-logind[1531]: Session 75 logged out. Waiting for processes to exit.
May 15 13:00:17.295452 systemd-logind[1531]: Removed session 75.
May 15 13:00:22.027339 kubelet[2697]: E0515 13:00:22.026596 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 13:00:22.344506 systemd[1]: Started sshd@77-172.236.126.108:22-139.178.89.65:46008.service - OpenSSH per-connection server daemon (139.178.89.65:46008).
May 15 13:00:22.682402 sshd[8121]: Accepted publickey for core from 139.178.89.65 port 46008 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:22.684550 sshd-session[8121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:22.690085 systemd-logind[1531]: New session 76 of user core.
May 15 13:00:22.695753 systemd[1]: Started session-76.scope - Session 76 of User core.
May 15 13:00:22.986996 sshd[8123]: Connection closed by 139.178.89.65 port 46008
May 15 13:00:22.988178 sshd-session[8121]: pam_unix(sshd:session): session closed for user core
May 15 13:00:22.993547 systemd[1]: sshd@77-172.236.126.108:22-139.178.89.65:46008.service: Deactivated successfully.
May 15 13:00:22.996449 systemd[1]: session-76.scope: Deactivated successfully.
May 15 13:00:22.997412 systemd-logind[1531]: Session 76 logged out. Waiting for processes to exit.
May 15 13:00:22.998944 systemd-logind[1531]: Removed session 76.
May 15 13:00:28.052957 systemd[1]: Started sshd@78-172.236.126.108:22-139.178.89.65:50212.service - OpenSSH per-connection server daemon (139.178.89.65:50212).
May 15 13:00:28.412667 sshd[8137]: Accepted publickey for core from 139.178.89.65 port 50212 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:28.414586 sshd-session[8137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:28.420263 systemd-logind[1531]: New session 77 of user core.
May 15 13:00:28.429730 systemd[1]: Started session-77.scope - Session 77 of User core.
May 15 13:00:28.723859 sshd[8139]: Connection closed by 139.178.89.65 port 50212
May 15 13:00:28.724806 sshd-session[8137]: pam_unix(sshd:session): session closed for user core
May 15 13:00:28.729277 systemd[1]: sshd@78-172.236.126.108:22-139.178.89.65:50212.service: Deactivated successfully.
May 15 13:00:28.731421 systemd[1]: session-77.scope: Deactivated successfully.
May 15 13:00:28.732490 systemd-logind[1531]: Session 77 logged out. Waiting for processes to exit.
May 15 13:00:28.734428 systemd-logind[1531]: Removed session 77.
May 15 13:00:33.785535 systemd[1]: Started sshd@79-172.236.126.108:22-139.178.89.65:50216.service - OpenSSH per-connection server daemon (139.178.89.65:50216).
May 15 13:00:34.135908 sshd[8153]: Accepted publickey for core from 139.178.89.65 port 50216 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:34.137381 sshd-session[8153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:34.142908 systemd-logind[1531]: New session 78 of user core.
May 15 13:00:34.150709 systemd[1]: Started session-78.scope - Session 78 of User core.
May 15 13:00:34.447799 sshd[8155]: Connection closed by 139.178.89.65 port 50216
May 15 13:00:34.448618 sshd-session[8153]: pam_unix(sshd:session): session closed for user core
May 15 13:00:34.451643 systemd[1]: sshd@79-172.236.126.108:22-139.178.89.65:50216.service: Deactivated successfully.
May 15 13:00:34.453636 systemd[1]: session-78.scope: Deactivated successfully.
May 15 13:00:34.454890 systemd-logind[1531]: Session 78 logged out. Waiting for processes to exit.
May 15 13:00:34.457288 systemd-logind[1531]: Removed session 78.
May 15 13:00:39.510673 systemd[1]: Started sshd@80-172.236.126.108:22-139.178.89.65:37370.service - OpenSSH per-connection server daemon (139.178.89.65:37370).
May 15 13:00:39.851221 sshd[8167]: Accepted publickey for core from 139.178.89.65 port 37370 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:39.853389 sshd-session[8167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:39.859059 systemd-logind[1531]: New session 79 of user core.
May 15 13:00:39.863746 systemd[1]: Started session-79.scope - Session 79 of User core.
May 15 13:00:40.161891 sshd[8169]: Connection closed by 139.178.89.65 port 37370
May 15 13:00:40.163273 sshd-session[8167]: pam_unix(sshd:session): session closed for user core
May 15 13:00:40.170529 systemd[1]: sshd@80-172.236.126.108:22-139.178.89.65:37370.service: Deactivated successfully.
May 15 13:00:40.173341 systemd[1]: session-79.scope: Deactivated successfully.
May 15 13:00:40.175028 systemd-logind[1531]: Session 79 logged out. Waiting for processes to exit.
May 15 13:00:40.177524 systemd-logind[1531]: Removed session 79.
May 15 13:00:43.026430 kubelet[2697]: E0515 13:00:43.026370 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 13:00:44.829452 containerd[1555]: time="2025-05-15T13:00:44.829331968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"e2967ba7296187d661821589a908014d53125f31f6511742529a23176ef13eac\" pid:8192 exited_at:{seconds:1747314044 nanos:828945815}"
May 15 13:00:45.231119 systemd[1]: Started sshd@81-172.236.126.108:22-139.178.89.65:37372.service - OpenSSH per-connection server daemon (139.178.89.65:37372).
May 15 13:00:45.498699 containerd[1555]: time="2025-05-15T13:00:45.498259119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"361c8cc4195f0e2928b893181f80e8813d927f29f6aa00ace56627c026deb5d0\" pid:8216 exited_at:{seconds:1747314045 nanos:497896636}"
May 15 13:00:45.581006 sshd[8202]: Accepted publickey for core from 139.178.89.65 port 37372 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:45.582718 sshd-session[8202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:45.588638 systemd-logind[1531]: New session 80 of user core.
May 15 13:00:45.596695 systemd[1]: Started session-80.scope - Session 80 of User core.
May 15 13:00:45.892951 sshd[8228]: Connection closed by 139.178.89.65 port 37372
May 15 13:00:45.893771 sshd-session[8202]: pam_unix(sshd:session): session closed for user core
May 15 13:00:45.898437 systemd-logind[1531]: Session 80 logged out. Waiting for processes to exit.
May 15 13:00:45.899821 systemd[1]: sshd@81-172.236.126.108:22-139.178.89.65:37372.service: Deactivated successfully.
May 15 13:00:45.902542 systemd[1]: session-80.scope: Deactivated successfully.
May 15 13:00:45.904765 systemd-logind[1531]: Removed session 80.
May 15 13:00:46.027017 kubelet[2697]: E0515 13:00:46.026775 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 13:00:46.473022 containerd[1555]: time="2025-05-15T13:00:46.472939462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"600ca98814d3eea1ba71dbd23bc5e95b7f2dff167b1ab97a5f5c42eefcae636d\" pid:8251 exited_at:{seconds:1747314046 nanos:472542319}"
May 15 13:00:50.951864 systemd[1]: Started sshd@82-172.236.126.108:22-139.178.89.65:33938.service - OpenSSH per-connection server daemon (139.178.89.65:33938).
May 15 13:00:51.294257 sshd[8261]: Accepted publickey for core from 139.178.89.65 port 33938 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:51.296382 sshd-session[8261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:51.303150 systemd-logind[1531]: New session 81 of user core.
May 15 13:00:51.305830 systemd[1]: Started session-81.scope - Session 81 of User core.
May 15 13:00:51.596790 sshd[8263]: Connection closed by 139.178.89.65 port 33938
May 15 13:00:51.597643 sshd-session[8261]: pam_unix(sshd:session): session closed for user core
May 15 13:00:51.602738 systemd[1]: sshd@82-172.236.126.108:22-139.178.89.65:33938.service: Deactivated successfully.
May 15 13:00:51.605539 systemd[1]: session-81.scope: Deactivated successfully.
May 15 13:00:51.606728 systemd-logind[1531]: Session 81 logged out. Waiting for processes to exit.
May 15 13:00:51.608639 systemd-logind[1531]: Removed session 81.
May 15 13:00:52.027589 kubelet[2697]: E0515 13:00:52.027028 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 13:00:54.026744 kubelet[2697]: E0515 13:00:54.025919 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 13:00:56.664744 systemd[1]: Started sshd@83-172.236.126.108:22-139.178.89.65:46368.service - OpenSSH per-connection server daemon (139.178.89.65:46368).
May 15 13:00:57.009941 sshd[8275]: Accepted publickey for core from 139.178.89.65 port 46368 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:00:57.012180 sshd-session[8275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:00:57.017274 systemd-logind[1531]: New session 82 of user core.
May 15 13:00:57.024863 systemd[1]: Started session-82.scope - Session 82 of User core.
May 15 13:00:57.327386 sshd[8277]: Connection closed by 139.178.89.65 port 46368
May 15 13:00:57.328000 sshd-session[8275]: pam_unix(sshd:session): session closed for user core
May 15 13:00:57.331903 systemd-logind[1531]: Session 82 logged out. Waiting for processes to exit.
May 15 13:00:57.332635 systemd[1]: sshd@83-172.236.126.108:22-139.178.89.65:46368.service: Deactivated successfully.
May 15 13:00:57.334909 systemd[1]: session-82.scope: Deactivated successfully.
May 15 13:00:57.337217 systemd-logind[1531]: Removed session 82.
May 15 13:01:02.391848 systemd[1]: Started sshd@84-172.236.126.108:22-139.178.89.65:46382.service - OpenSSH per-connection server daemon (139.178.89.65:46382).
May 15 13:01:02.743153 sshd[8291]: Accepted publickey for core from 139.178.89.65 port 46382 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:01:02.744806 sshd-session[8291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:01:02.749419 systemd-logind[1531]: New session 83 of user core.
May 15 13:01:02.759691 systemd[1]: Started session-83.scope - Session 83 of User core.
May 15 13:01:03.068417 sshd[8293]: Connection closed by 139.178.89.65 port 46382
May 15 13:01:03.069008 sshd-session[8291]: pam_unix(sshd:session): session closed for user core
May 15 13:01:03.073591 systemd[1]: sshd@84-172.236.126.108:22-139.178.89.65:46382.service: Deactivated successfully.
May 15 13:01:03.076632 systemd[1]: session-83.scope: Deactivated successfully.
May 15 13:01:03.080221 systemd-logind[1531]: Session 83 logged out. Waiting for processes to exit.
May 15 13:01:03.081809 systemd-logind[1531]: Removed session 83.
May 15 13:01:06.026589 kubelet[2697]: E0515 13:01:06.026059 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 13:01:08.133348 systemd[1]: Started sshd@85-172.236.126.108:22-139.178.89.65:53800.service - OpenSSH per-connection server daemon (139.178.89.65:53800).
May 15 13:01:08.481230 sshd[8305]: Accepted publickey for core from 139.178.89.65 port 53800 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:01:08.482634 sshd-session[8305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:01:08.487191 systemd-logind[1531]: New session 84 of user core.
May 15 13:01:08.492718 systemd[1]: Started session-84.scope - Session 84 of User core.
May 15 13:01:08.780762 sshd[8307]: Connection closed by 139.178.89.65 port 53800
May 15 13:01:08.781574 sshd-session[8305]: pam_unix(sshd:session): session closed for user core
May 15 13:01:08.785488 systemd[1]: sshd@85-172.236.126.108:22-139.178.89.65:53800.service: Deactivated successfully.
May 15 13:01:08.788336 systemd[1]: session-84.scope: Deactivated successfully.
May 15 13:01:08.792879 systemd-logind[1531]: Session 84 logged out. Waiting for processes to exit.
May 15 13:01:08.794979 systemd-logind[1531]: Removed session 84.
May 15 13:01:13.842587 systemd[1]: Started sshd@86-172.236.126.108:22-139.178.89.65:53808.service - OpenSSH per-connection server daemon (139.178.89.65:53808).
May 15 13:01:14.026842 kubelet[2697]: E0515 13:01:14.026141 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"
May 15 13:01:14.182333 sshd[8319]: Accepted publickey for core from 139.178.89.65 port 53808 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:01:14.183921 sshd-session[8319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:01:14.189463 systemd-logind[1531]: New session 85 of user core.
May 15 13:01:14.196737 systemd[1]: Started session-85.scope - Session 85 of User core.
May 15 13:01:14.484267 sshd[8321]: Connection closed by 139.178.89.65 port 53808
May 15 13:01:14.484851 sshd-session[8319]: pam_unix(sshd:session): session closed for user core
May 15 13:01:14.489429 systemd-logind[1531]: Session 85 logged out. Waiting for processes to exit.
May 15 13:01:14.490311 systemd[1]: sshd@86-172.236.126.108:22-139.178.89.65:53808.service: Deactivated successfully.
May 15 13:01:14.493001 systemd[1]: session-85.scope: Deactivated successfully.
May 15 13:01:14.494442 systemd-logind[1531]: Removed session 85.
May 15 13:01:15.488347 containerd[1555]: time="2025-05-15T13:01:15.488298888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57548905b14b9ba92ad83db92d14e98f883e61d978353b506ccdd5381450da9c\" id:\"8a07aec5699f2e6c6f3fc58b46be45f9d59a4a27e31b49e766ba83375b0cd0e9\" pid:8344 exited_at:{seconds:1747314075 nanos:487974035}"
May 15 13:01:16.468933 containerd[1555]: time="2025-05-15T13:01:16.468898278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f0601ec741eaeecedefdf673391ade56160fb087508c974065f179170842d0\" id:\"625dab179db601acd7976666fda0dd66d5d012019c4c3e9e6bc2819a7d0b3992\" pid:8369 exited_at:{seconds:1747314076 nanos:468430074}"
May 15 13:01:19.543419 systemd[1]: Started sshd@87-172.236.126.108:22-139.178.89.65:45492.service - OpenSSH per-connection server daemon (139.178.89.65:45492).
May 15 13:01:19.875668 sshd[8379]: Accepted publickey for core from 139.178.89.65 port 45492 ssh2: RSA SHA256:KWtLq1eXEQfwrLR1UpnzjQwMCl911bHb8cMo6sBQMFE
May 15 13:01:19.877054 sshd-session[8379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 13:01:19.882641 systemd-logind[1531]: New session 86 of user core.
May 15 13:01:19.885702 systemd[1]: Started session-86.scope - Session 86 of User core.
May 15 13:01:20.181806 sshd[8381]: Connection closed by 139.178.89.65 port 45492
May 15 13:01:20.182462 sshd-session[8379]: pam_unix(sshd:session): session closed for user core
May 15 13:01:20.187265 systemd[1]: sshd@87-172.236.126.108:22-139.178.89.65:45492.service: Deactivated successfully.
May 15 13:01:20.189884 systemd[1]: session-86.scope: Deactivated successfully.
May 15 13:01:20.191016 systemd-logind[1531]: Session 86 logged out. Waiting for processes to exit.
May 15 13:01:20.193572 systemd-logind[1531]: Removed session 86.
May 15 13:01:24.026277 kubelet[2697]: E0515 13:01:24.026116 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.15 172.232.0.18 172.232.0.17"