May 14 18:03:33.866196 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025 May 14 18:03:33.866234 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0 May 14 18:03:33.866244 kernel: BIOS-provided physical RAM map: May 14 18:03:33.866255 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable May 14 18:03:33.866260 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved May 14 18:03:33.866266 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 14 18:03:33.866273 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable May 14 18:03:33.866279 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved May 14 18:03:33.866285 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 14 18:03:33.866291 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 14 18:03:33.866297 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 14 18:03:33.866303 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 14 18:03:33.866312 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable May 14 18:03:33.866319 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 14 18:03:33.866326 kernel: NX (Execute Disable) protection: active May 14 18:03:33.866333 kernel: APIC: Static calls initialized May 14 18:03:33.866339 kernel: SMBIOS 2.8 present. 
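The BIOS-e820 map above is the firmware's inventory of physical memory, and the kernel's usable-RAM figure comes from summing the "usable" ranges (the ranges are inclusive). As a minimal sketch, with the three usable entries copied from the log, Python reproduces the roughly 4 GiB this instance accounts for later in boot:

import re

# The three "usable" entries from the BIOS-e820 map above.
E820_LOG = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
"""

ENTRY = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

usable = 0
for m in ENTRY.finditer(E820_LOG):
    start, end, kind = int(m[1], 16), int(m[2], 16), m[3]
    if kind == "usable":
        usable += end - start + 1  # e820 ranges are inclusive

print(f"usable RAM: {usable / 2**20:.1f} MiB")  # ~4095.5 MiB here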
May 14 18:03:33.866348 kernel: DMI: Linode Compute Instance, BIOS Not Specified May 14 18:03:33.866354 kernel: DMI: Memory slots populated: 1/1 May 14 18:03:33.866361 kernel: Hypervisor detected: KVM May 14 18:03:33.866367 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 14 18:03:33.866373 kernel: kvm-clock: using sched offset of 5832415720 cycles May 14 18:03:33.866380 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 14 18:03:33.866387 kernel: tsc: Detected 2000.000 MHz processor May 14 18:03:33.866394 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 14 18:03:33.866402 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 14 18:03:33.866408 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 May 14 18:03:33.866417 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 14 18:03:33.866424 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 14 18:03:33.866431 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 May 14 18:03:33.866437 kernel: Using GB pages for direct mapping May 14 18:03:33.866444 kernel: ACPI: Early table checksum verification disabled May 14 18:03:33.866451 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS ) May 14 18:03:33.866458 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:03:33.866465 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:03:33.866472 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:03:33.866481 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 14 18:03:33.866487 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:03:33.866494 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:03:33.866501 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:03:33.866511 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 18:03:33.866519 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] May 14 18:03:33.866528 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] May 14 18:03:33.866535 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 14 18:03:33.866542 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] May 14 18:03:33.866549 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] May 14 18:03:33.866556 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] May 14 18:03:33.866563 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] May 14 18:03:33.866570 kernel: No NUMA configuration found May 14 18:03:33.866577 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] May 14 18:03:33.866586 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] May 14 18:03:33.866593 kernel: Zone ranges: May 14 18:03:33.866601 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 14 18:03:33.866608 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 14 18:03:33.866615 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] May 14 18:03:33.866622 kernel: Device empty May 14 18:03:33.866629 kernel: Movable zone start for each node May 14 18:03:33.866636 kernel: Early memory node ranges May 14 18:03:33.866643 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] May 14 18:03:33.866650 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] May 14 18:03:33.866659 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] May 14 18:03:33.866666 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 14 18:03:33.866673 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 18:03:33.866680 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 14 18:03:33.866687 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 14 18:03:33.866694 kernel: ACPI: PM-Timer IO Port: 0x608 May 14 18:03:33.866701 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 14 18:03:33.866709 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 14 18:03:33.866716 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 14 18:03:33.866725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 14 18:03:33.866732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 14 18:03:33.866739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 14 18:03:33.866746 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 14 18:03:33.866753 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 14 18:03:33.866760 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 14 18:03:33.866767 kernel: TSC deadline timer available May 14 18:03:33.866774 kernel: CPU topo: Max. logical packages: 1 May 14 18:03:33.866781 kernel: CPU topo: Max. logical dies: 1 May 14 18:03:33.866791 kernel: CPU topo: Max. dies per package: 1 May 14 18:03:33.866798 kernel: CPU topo: Max. threads per core: 1 May 14 18:03:33.866805 kernel: CPU topo: Num. cores per package: 2 May 14 18:03:33.866811 kernel: CPU topo: Num. threads per package: 2 May 14 18:03:33.866819 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 14 18:03:33.866825 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 14 18:03:33.866832 kernel: kvm-guest: KVM setup pv remote TLB flush May 14 18:03:33.866840 kernel: kvm-guest: setup PV sched yield May 14 18:03:33.866847 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 14 18:03:33.866856 kernel: Booting paravirtualized kernel on KVM May 14 18:03:33.866863 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 14 18:03:33.866871 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 14 18:03:33.866878 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 14 18:03:33.866885 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 14 18:03:33.866892 kernel: pcpu-alloc: [0] 0 1 May 14 18:03:33.866899 kernel: kvm-guest: PV spinlocks enabled May 14 18:03:33.866906 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 14 18:03:33.866931 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0 May 14 18:03:33.866942 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
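The kernel command line above appears with its rootflags=rw mount.usrflags=ro pair duplicated (prepended once by the boot stack on top of the embedded arguments), and the kernel explicitly warns that BOOT_IMAGE=... is unknown to it and will be handed to user space, which is why it resurfaces in /init's environment later in this log. A rough tokenizer, as a sketch only (it ignores quoting and whatever per-option duplicate-handling rules the kernel applies):

def parse_cmdline(cmdline: str) -> dict:
    # Split a /proc/cmdline-style string into {key: value};
    # later duplicates overwrite earlier ones in this toy version.
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")  # bare flags get value ""
        params[key] = value
    return params

cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "root=LABEL=ROOT console=ttyS0,115200n8 flatcar.oem.id=akamai")
print(parse_cmdline(cmdline)["flatcar.oem.id"])  # akamai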
May 14 18:03:33.866949 kernel: random: crng init done May 14 18:03:33.866956 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 18:03:33.866964 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 18:03:33.866971 kernel: Fallback order for Node 0: 0 May 14 18:03:33.866978 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 May 14 18:03:33.866986 kernel: Policy zone: Normal May 14 18:03:33.866993 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 18:03:33.867002 kernel: software IO TLB: area num 2. May 14 18:03:33.867009 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 18:03:33.867017 kernel: ftrace: allocating 40065 entries in 157 pages May 14 18:03:33.867024 kernel: ftrace: allocated 157 pages with 5 groups May 14 18:03:33.867031 kernel: Dynamic Preempt: voluntary May 14 18:03:33.867038 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 18:03:33.867046 kernel: rcu: RCU event tracing is enabled. May 14 18:03:33.867054 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 18:03:33.867061 kernel: Trampoline variant of Tasks RCU enabled. May 14 18:03:33.867068 kernel: Rude variant of Tasks RCU enabled. May 14 18:03:33.867078 kernel: Tracing variant of Tasks RCU enabled. May 14 18:03:33.867084 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 18:03:33.867092 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 18:03:33.867099 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 18:03:33.867114 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 18:03:33.867123 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 18:03:33.867131 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 14 18:03:33.867138 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 18:03:33.867146 kernel: Console: colour VGA+ 80x25 May 14 18:03:33.867153 kernel: printk: legacy console [tty0] enabled May 14 18:03:33.867161 kernel: printk: legacy console [ttyS0] enabled May 14 18:03:33.867170 kernel: ACPI: Core revision 20240827 May 14 18:03:33.867178 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 14 18:03:33.867185 kernel: APIC: Switch to symmetric I/O mode setup May 14 18:03:33.867193 kernel: x2apic enabled May 14 18:03:33.867200 kernel: APIC: Switched APIC routing to: physical x2apic May 14 18:03:33.867210 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 14 18:03:33.867218 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 14 18:03:33.867225 kernel: kvm-guest: setup PV IPIs May 14 18:03:33.867232 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 14 18:03:33.867240 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns May 14 18:03:33.867247 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) May 14 18:03:33.867255 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 14 18:03:33.867262 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 14 18:03:33.867269 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 14 18:03:33.867279 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 14 18:03:33.867287 kernel: Spectre V2 : Mitigation: Retpolines May 14 18:03:33.867294 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 14 18:03:33.867302 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 14 18:03:33.867309 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 14 18:03:33.867317 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 14 18:03:33.867324 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 14 18:03:33.867332 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 14 18:03:33.867342 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 14 18:03:33.867350 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 14 18:03:33.867357 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 14 18:03:33.867365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 14 18:03:33.867372 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 14 18:03:33.867380 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 14 18:03:33.867387 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 14 18:03:33.867395 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 14 18:03:33.867402 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 14 18:03:33.867412 kernel: Freeing SMP alternatives memory: 32K May 14 18:03:33.867420 kernel: pid_max: default: 32768 minimum: 301 May 14 18:03:33.867427 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 14 18:03:33.867434 kernel: landlock: Up and running. May 14 18:03:33.867442 kernel: SELinux: Initializing. May 14 18:03:33.867449 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 18:03:33.867457 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 18:03:33.867464 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 14 18:03:33.867472 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 14 18:03:33.867481 kernel: ... version: 0 May 14 18:03:33.867489 kernel: ... bit width: 48 May 14 18:03:33.867496 kernel: ... generic registers: 6 May 14 18:03:33.867503 kernel: ... value mask: 0000ffffffffffff May 14 18:03:33.867511 kernel: ... max period: 00007fffffffffff May 14 18:03:33.867518 kernel: ... fixed-purpose events: 0 May 14 18:03:33.867525 kernel: ... event mask: 000000000000003f May 14 18:03:33.867533 kernel: signal: max sigframe size: 3376 May 14 18:03:33.867540 kernel: rcu: Hierarchical SRCU implementation. May 14 18:03:33.867550 kernel: rcu: Max phase no-delay instances is 400. 
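The mitigation lines above (Spectre V1/V2, Speculative Store Bypass, and the SRSO "Vulnerable: Safe RET, no microcode" warning) have runtime counterparts under sysfs on any modern Linux guest, one file per issue, so the same status can be re-checked after boot without grepping the journal:

from pathlib import Path

# Standard sysfs location on Linux; each file holds one status string.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    print(f"{entry.name:30} {entry.read_text().strip()}")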
May 14 18:03:33.867557 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 14 18:03:33.867565 kernel: smp: Bringing up secondary CPUs ... May 14 18:03:33.867572 kernel: smpboot: x86: Booting SMP configuration: May 14 18:03:33.867579 kernel: .... node #0, CPUs: #1 May 14 18:03:33.867587 kernel: smp: Brought up 1 node, 2 CPUs May 14 18:03:33.867594 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) May 14 18:03:33.867602 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 227288K reserved, 0K cma-reserved) May 14 18:03:33.867609 kernel: devtmpfs: initialized May 14 18:03:33.867619 kernel: x86/mm: Memory block size: 128MB May 14 18:03:33.867626 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 18:03:33.867634 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 18:03:33.867641 kernel: pinctrl core: initialized pinctrl subsystem May 14 18:03:33.867649 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 18:03:33.867656 kernel: audit: initializing netlink subsys (disabled) May 14 18:03:33.867664 kernel: audit: type=2000 audit(1747245811.310:1): state=initialized audit_enabled=0 res=1 May 14 18:03:33.867671 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 18:03:33.867679 kernel: thermal_sys: Registered thermal governor 'user_space' May 14 18:03:33.867689 kernel: cpuidle: using governor menu May 14 18:03:33.867696 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 18:03:33.867704 kernel: dca service started, version 1.12.1 May 14 18:03:33.867711 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] May 14 18:03:33.867719 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 14 18:03:33.867726 kernel: PCI: Using configuration type 1 for base access May 14 18:03:33.867734 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
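The "Memory:" summary above packs the boot-time accounting into one line; pulling the two leading figures apart shows how much the kernel set aside before handing off (the parenthesized breakdown of code, data, and reserved regions accounts for most of the gap). A quick check, with the line copied from the log:

import re

MEMLINE = ("Memory: 3961808K/4193772K available (14336K kernel code, "
           "2438K rwdata, 9944K rodata, 54424K init, 2536K bss, "
           "227288K reserved, 0K cma-reserved)")

avail, total = map(int, re.match(r"Memory: (\d+)K/(\d+)K", MEMLINE).groups())
held = total - avail
print(f"held back at boot: {held} K "
      f"({100 * held / total:.1f}% of {total // 1024} MiB)")
# held back at boot: 231964 K (5.5% of 4095 MiB)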
May 14 18:03:33.867741 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 18:03:33.867749 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 14 18:03:33.867758 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 18:03:33.867765 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 14 18:03:33.867773 kernel: ACPI: Added _OSI(Module Device) May 14 18:03:33.867780 kernel: ACPI: Added _OSI(Processor Device) May 14 18:03:33.867788 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 18:03:33.867795 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 18:03:33.867802 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 18:03:33.867810 kernel: ACPI: Interpreter enabled May 14 18:03:33.867817 kernel: ACPI: PM: (supports S0 S3 S5) May 14 18:03:33.867827 kernel: ACPI: Using IOAPIC for interrupt routing May 14 18:03:33.867834 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 14 18:03:33.867841 kernel: PCI: Using E820 reservations for host bridge windows May 14 18:03:33.867849 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 14 18:03:33.867856 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 18:03:33.871659 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 18:03:33.871809 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 14 18:03:33.871955 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 14 18:03:33.871977 kernel: PCI host bridge to bus 0000:00 May 14 18:03:33.872108 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 14 18:03:33.872213 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 14 18:03:33.872313 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 14 18:03:33.872410 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 14 18:03:33.872507 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 14 18:03:33.872625 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 14 18:03:33.872735 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 18:03:33.872875 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 14 18:03:33.873032 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint May 14 18:03:33.873146 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] May 14 18:03:33.873252 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] May 14 18:03:33.873358 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] May 14 18:03:33.873470 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 14 18:03:33.873589 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint May 14 18:03:33.873698 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] May 14 18:03:33.873804 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] May 14 18:03:33.873910 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] May 14 18:03:33.875078 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 14 18:03:33.875195 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] May 14 18:03:33.875311 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] May 14 
18:03:33.875418 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] May 14 18:03:33.875524 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] May 14 18:03:33.875643 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 14 18:03:33.875752 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 14 18:03:33.875876 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 14 18:03:33.877557 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] May 14 18:03:33.877673 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] May 14 18:03:33.877792 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 14 18:03:33.877900 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] May 14 18:03:33.877931 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 14 18:03:33.877940 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 14 18:03:33.877948 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 14 18:03:33.878023 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 14 18:03:33.878035 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 14 18:03:33.878043 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 14 18:03:33.878051 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 14 18:03:33.878059 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 14 18:03:33.878067 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 14 18:03:33.878074 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 14 18:03:33.878082 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 14 18:03:33.878089 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 14 18:03:33.878097 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 14 18:03:33.878107 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 14 18:03:33.878115 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 14 18:03:33.878123 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 14 18:03:33.878131 kernel: iommu: Default domain type: Translated May 14 18:03:33.878138 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 14 18:03:33.878146 kernel: PCI: Using ACPI for IRQ routing May 14 18:03:33.878154 kernel: PCI: pci_cache_line_size set to 64 bytes May 14 18:03:33.878162 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 14 18:03:33.878170 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 14 18:03:33.878291 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 14 18:03:33.878400 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 14 18:03:33.878506 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 14 18:03:33.878516 kernel: vgaarb: loaded May 14 18:03:33.878524 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 14 18:03:33.878532 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 14 18:03:33.878539 kernel: clocksource: Switched to clocksource kvm-clock May 14 18:03:33.878547 kernel: VFS: Disk quotas dquot_6.6.0 May 14 18:03:33.878560 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 18:03:33.878567 kernel: pnp: PnP ACPI init May 14 18:03:33.878690 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 14 18:03:33.878702 kernel: pnp: PnP ACPI: found 5 
devices May 14 18:03:33.878710 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 14 18:03:33.878718 kernel: NET: Registered PF_INET protocol family May 14 18:03:33.878726 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 18:03:33.878734 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 18:03:33.878745 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 18:03:33.878753 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 18:03:33.878760 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 14 18:03:33.878768 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 18:03:33.878776 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 18:03:33.878784 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 18:03:33.878791 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 18:03:33.878799 kernel: NET: Registered PF_XDP protocol family May 14 18:03:33.878901 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 14 18:03:33.880378 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 14 18:03:33.880481 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 14 18:03:33.880593 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 14 18:03:33.880729 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 14 18:03:33.880828 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 14 18:03:33.880838 kernel: PCI: CLS 0 bytes, default 64 May 14 18:03:33.880846 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 14 18:03:33.880854 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 14 18:03:33.880867 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns May 14 18:03:33.880874 kernel: Initialise system trusted keyrings May 14 18:03:33.880883 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 18:03:33.880890 kernel: Key type asymmetric registered May 14 18:03:33.880898 kernel: Asymmetric key parser 'x509' registered May 14 18:03:33.880905 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 18:03:33.880950 kernel: io scheduler mq-deadline registered May 14 18:03:33.880958 kernel: io scheduler kyber registered May 14 18:03:33.880966 kernel: io scheduler bfq registered May 14 18:03:33.880977 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 14 18:03:33.880985 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 14 18:03:33.880993 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 14 18:03:33.881001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 18:03:33.881008 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 14 18:03:33.881016 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 14 18:03:33.881023 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 14 18:03:33.881031 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 14 18:03:33.881156 kernel: rtc_cmos 00:03: RTC can wake from S4 May 14 18:03:33.881171 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 14 18:03:33.881272 kernel: rtc_cmos 00:03: registered as 
rtc0 May 14 18:03:33.881372 kernel: rtc_cmos 00:03: setting system clock to 2025-05-14T18:03:33 UTC (1747245813) May 14 18:03:33.881477 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 14 18:03:33.881487 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 14 18:03:33.881495 kernel: NET: Registered PF_INET6 protocol family May 14 18:03:33.881503 kernel: Segment Routing with IPv6 May 14 18:03:33.881511 kernel: In-situ OAM (IOAM) with IPv6 May 14 18:03:33.881521 kernel: NET: Registered PF_PACKET protocol family May 14 18:03:33.881529 kernel: Key type dns_resolver registered May 14 18:03:33.881537 kernel: IPI shorthand broadcast: enabled May 14 18:03:33.881544 kernel: sched_clock: Marking stable (2763004140, 223009360)->(3079845020, -93831520) May 14 18:03:33.881552 kernel: registered taskstats version 1 May 14 18:03:33.881560 kernel: Loading compiled-in X.509 certificates May 14 18:03:33.881568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef' May 14 18:03:33.881575 kernel: Demotion targets for Node 0: null May 14 18:03:33.881583 kernel: Key type .fscrypt registered May 14 18:03:33.881592 kernel: Key type fscrypt-provisioning registered May 14 18:03:33.881599 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 18:03:33.881607 kernel: ima: Allocated hash algorithm: sha1 May 14 18:03:33.881614 kernel: ima: No architecture policies found May 14 18:03:33.881622 kernel: clk: Disabling unused clocks May 14 18:03:33.881629 kernel: Warning: unable to open an initial console. May 14 18:03:33.881638 kernel: Freeing unused kernel image (initmem) memory: 54424K May 14 18:03:33.881645 kernel: Write protecting the kernel read-only data: 24576k May 14 18:03:33.881653 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 14 18:03:33.881662 kernel: Run /init as init process May 14 18:03:33.881670 kernel: with arguments: May 14 18:03:33.881678 kernel: /init May 14 18:03:33.881685 kernel: with environment: May 14 18:03:33.881693 kernel: HOME=/ May 14 18:03:33.881716 kernel: TERM=linux May 14 18:03:33.881726 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 18:03:33.881735 systemd[1]: Successfully made /usr/ read-only. May 14 18:03:33.881749 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:03:33.881758 systemd[1]: Detected virtualization kvm. May 14 18:03:33.881769 systemd[1]: Detected architecture x86-64. May 14 18:03:33.881777 systemd[1]: Running in initrd. May 14 18:03:33.881785 systemd[1]: No hostname configured, using default hostname. May 14 18:03:33.881793 systemd[1]: Hostname set to . May 14 18:03:33.881802 systemd[1]: Initializing machine ID from random generator. May 14 18:03:33.881810 systemd[1]: Queued start job for default target initrd.target. May 14 18:03:33.881820 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:03:33.881829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:03:33.881838 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 14 18:03:33.881846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:03:33.881854 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 18:03:33.881863 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 18:03:33.881874 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 18:03:33.881883 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 18:03:33.881892 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:03:33.881900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:03:33.881908 systemd[1]: Reached target paths.target - Path Units. May 14 18:03:33.881937 systemd[1]: Reached target slices.target - Slice Units. May 14 18:03:33.881946 systemd[1]: Reached target swap.target - Swaps. May 14 18:03:33.881954 systemd[1]: Reached target timers.target - Timer Units. May 14 18:03:33.881962 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:03:33.881973 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:03:33.881981 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 18:03:33.881989 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 18:03:33.881997 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:03:33.882005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:03:33.882013 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:03:33.882024 systemd[1]: Reached target sockets.target - Socket Units. May 14 18:03:33.882032 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 18:03:33.882040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:03:33.882048 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 18:03:33.882057 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 14 18:03:33.882065 systemd[1]: Starting systemd-fsck-usr.service... May 14 18:03:33.882073 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:03:33.882081 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 18:03:33.882091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:03:33.882126 systemd-journald[205]: Collecting audit messages is disabled. May 14 18:03:33.882148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 18:03:33.882159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:03:33.882168 systemd[1]: Finished systemd-fsck-usr.service. May 14 18:03:33.882177 systemd-journald[205]: Journal started May 14 18:03:33.882198 systemd-journald[205]: Runtime Journal (/run/log/journal/2f47941161294189a4ad6041b932143d) is 8M, max 78.5M, 70.5M free. May 14 18:03:33.860204 systemd-modules-load[207]: Inserted module 'overlay' May 14 18:03:33.888052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
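The dev-disk-by\x2dlabel-... device units above are systemd's path escaping at work: '/' separators become '-', and bytes that would be ambiguous in a unit name, including a literal '-' (0x2d), are written as \xXX; systemd-escape --path produces the same form. An approximate reimplementation as a sketch (real systemd also special-cases leading dots and empty components):

# Characters systemd leaves unescaped in unit names (approximation).
SAFE = set("abcdefghijklmnopqrstuvwxyz"
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")

def escape_path(path: str) -> str:
    parts = path.strip("/").split("/")
    escaped = ("".join(c if c in SAFE else f"\\x{ord(c):02x}" for c in part)
               for part in parts)
    return "-".join(escaped)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device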
May 14 18:03:33.891954 systemd[1]: Started systemd-journald.service - Journal Service. May 14 18:03:33.896959 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 18:03:33.900939 kernel: Bridge firewalling registered May 14 18:03:33.901083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 18:03:33.901991 systemd-modules-load[207]: Inserted module 'br_netfilter' May 14 18:03:33.980071 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 18:03:33.982010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:03:33.983717 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:03:33.987738 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 18:03:33.987957 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 18:03:33.992632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 18:03:33.996109 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 18:03:33.997659 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:03:34.013741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 18:03:34.017119 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 18:03:34.018714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:03:34.021368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:03:34.024574 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 18:03:34.042105 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0 May 14 18:03:34.066579 systemd-resolved[240]: Positive Trust Anchors: May 14 18:03:34.066600 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 18:03:34.066627 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 18:03:34.069751 systemd-resolved[240]: Defaulting to hostname 'linux'. May 14 18:03:34.072064 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 18:03:34.073730 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
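The positive trust anchor systemd-resolved loads above is the DNSSEC root key (key tag 20326 is the root KSK-2017); the DS record's fields decode per RFC 4034 as key tag, algorithm (8 = RSA/SHA-256), digest type (2 = SHA-256), and digest. Pulling it apart:

# Root trust anchor as logged above, split into its RFC 4034 fields.
DS = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, rrclass, rrtype, key_tag, alg, digest_type, digest = DS.split()
print(f"key tag {key_tag}, algorithm {alg}, digest type {digest_type}")
print(f"digest is {len(digest) * 4} bits")  # 64 hex chars -> 256-bit SHA-256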
May 14 18:03:34.148966 kernel: SCSI subsystem initialized May 14 18:03:34.157961 kernel: Loading iSCSI transport class v2.0-870. May 14 18:03:34.168959 kernel: iscsi: registered transport (tcp) May 14 18:03:34.189605 kernel: iscsi: registered transport (qla4xxx) May 14 18:03:34.189683 kernel: QLogic iSCSI HBA Driver May 14 18:03:34.213954 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:03:34.235437 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:03:34.236578 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 18:03:34.286833 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 18:03:34.289247 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 18:03:34.343957 kernel: raid6: avx2x4 gen() 26160 MB/s May 14 18:03:34.361951 kernel: raid6: avx2x2 gen() 22671 MB/s May 14 18:03:34.380261 kernel: raid6: avx2x1 gen() 14436 MB/s May 14 18:03:34.380310 kernel: raid6: using algorithm avx2x4 gen() 26160 MB/s May 14 18:03:34.399268 kernel: raid6: .... xor() 3040 MB/s, rmw enabled May 14 18:03:34.399319 kernel: raid6: using avx2x2 recovery algorithm May 14 18:03:34.418947 kernel: xor: automatically using best checksumming function avx May 14 18:03:34.551163 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 18:03:34.559971 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 18:03:34.562462 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:03:34.590761 systemd-udevd[455]: Using default interface naming scheme 'v255'. May 14 18:03:34.596060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 18:03:34.598901 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 18:03:34.621640 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation May 14 18:03:34.657335 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:03:34.660718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:03:34.727750 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:03:34.733076 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 18:03:34.809956 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues May 14 18:03:34.902110 kernel: scsi host0: Virtio SCSI HBA May 14 18:03:34.902287 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 14 18:03:34.911970 kernel: cryptd: max_cpu_qlen set to 1000 May 14 18:03:34.917014 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 14 18:03:34.917037 kernel: libata version 3.00 loaded. May 14 18:03:34.930585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 18:03:34.930741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:03:34.933383 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:03:34.938837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
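Before committing to an implementation, the kernel benchmarks each raid6 gen() variant above and keeps the fastest one it measured (recovery is timed separately, which is why it lands on avx2x2 here). The selection itself is just an argmax over the measured rates; with the figures from the log:

# gen() throughputs measured above, in MB/s.
RESULTS = {"avx2x4": 26160, "avx2x2": 22671, "avx2x1": 14436}

best = max(RESULTS, key=RESULTS.get)
print(f"raid6: using algorithm {best} gen() {RESULTS[best]} MB/s")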
May 14 18:03:34.972202 kernel: sd 0:0:0:0: Power-on or device reset occurred May 14 18:03:34.974773 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 14 18:03:34.974947 kernel: sd 0:0:0:0: [sda] Write Protect is off May 14 18:03:34.975087 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 14 18:03:34.975216 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 14 18:03:34.975344 kernel: AES CTR mode by8 optimization enabled May 14 18:03:34.975355 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 18:03:34.975364 kernel: GPT:9289727 != 167739391 May 14 18:03:34.975377 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 18:03:34.975387 kernel: GPT:9289727 != 167739391 May 14 18:03:34.975396 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 18:03:34.975404 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 18:03:34.975413 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 14 18:03:34.973179 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 18:03:34.984014 kernel: ahci 0000:00:1f.2: version 3.0 May 14 18:03:35.015435 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 14 18:03:35.015454 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 14 18:03:35.015607 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 14 18:03:35.015762 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 14 18:03:35.015886 kernel: scsi host1: ahci May 14 18:03:35.016047 kernel: scsi host2: ahci May 14 18:03:35.016176 kernel: scsi host3: ahci May 14 18:03:35.016305 kernel: scsi host4: ahci May 14 18:03:35.016436 kernel: scsi host5: ahci May 14 18:03:35.016561 kernel: scsi host6: ahci May 14 18:03:35.016683 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 May 14 18:03:35.016693 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 May 14 18:03:35.016702 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 May 14 18:03:35.016711 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 May 14 18:03:35.016724 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 May 14 18:03:35.016733 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 May 14 18:03:35.088722 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 14 18:03:35.118128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:03:35.127445 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 14 18:03:35.134141 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 14 18:03:35.134734 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 14 18:03:35.143512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 18:03:35.145635 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 18:03:35.160179 disk-uuid[623]: Primary Header is updated. May 14 18:03:35.160179 disk-uuid[623]: Secondary Entries is updated. May 14 18:03:35.160179 disk-uuid[623]: Secondary Header is updated. 
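The GPT warnings above are the classic signature of a disk that was grown after the image was written: the primary header still records a backup header at the end of the original, smaller device, while the kernel now sees 167739392 sectors. The numbers line up with the sd 0:0:0:0 capacity line:

SECTOR = 512
disk_sectors = 167739392   # from "[sda] 167739392 512-byte logical blocks"
alt_header_lba = 9289727   # where the primary GPT header says the backup is

print(f"disk now: {disk_sectors * SECTOR / 1e9:.1f} GB "
      f"({disk_sectors * SECTOR / 2**30:.1f} GiB)")
print(f"backup header belongs at LBA {disk_sectors - 1}, "
      f"recorded at LBA {alt_header_lba}")
print(f"image was sized for about "
      f"{(alt_header_lba + 1) * SECTOR / 2**30:.2f} GiB")

As the kernel suggests, GNU Parted (or sgdisk) can relocate the backup header; on a Flatcar first boot the partition-growing machinery normally takes care of this on its own.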
May 14 18:03:35.170945 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 18:03:35.189999 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 18:03:35.320940 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 14 18:03:35.323958 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 14 18:03:35.323988 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 14 18:03:35.330931 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 14 18:03:35.330961 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 14 18:03:35.332934 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 14 18:03:35.345535 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 18:03:35.374190 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:03:35.374795 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:03:35.376418 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:03:35.378542 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 18:03:35.403901 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 18:03:36.208725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 18:03:36.208964 disk-uuid[624]: The operation has completed successfully. May 14 18:03:36.265950 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 18:03:36.266080 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 18:03:36.291031 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 18:03:36.307258 sh[651]: Success May 14 18:03:36.326653 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 18:03:36.326688 kernel: device-mapper: uevent: version 1.0.3 May 14 18:03:36.327414 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 18:03:36.339936 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 14 18:03:36.379552 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 18:03:36.382992 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 18:03:36.397731 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 18:03:36.408972 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 18:03:36.409002 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (663) May 14 18:03:36.412940 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7 May 14 18:03:36.415468 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 18:03:36.418427 kernel: BTRFS info (device dm-0): using free-space-tree May 14 18:03:36.425612 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 18:03:36.426507 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 18:03:36.427501 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 18:03:36.428187 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 18:03:36.430906 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
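verity-setup above is what turns the verity.usrhash=... argument from the kernel command line into the read-only /dev/mapper/usr device: the hash is the root of a Merkle tree over the /usr partition, so any block can be checked against it on demand. The following is a schematic sketch of that idea only; dm-verity's real on-disk format (superblock, salt, hash-block layout) differs:

import hashlib

# Schematic Merkle check: leaves are data-block hashes, the root is the
# value pinned by verity.usrhash on the kernel command line.
def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

data_blocks = [b"A" * 4096, b"B" * 4096]
leaves = [h(b) for b in data_blocks]
root = h(b"".join(leaves))

# Verifying block 0 needs only its sibling hash and the trusted root.
assert h(h(data_blocks[0]) + leaves[1]) == root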
May 14 18:03:36.457402 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (696) May 14 18:03:36.457429 kernel: BTRFS info (device sda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:03:36.461375 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 18:03:36.461400 kernel: BTRFS info (device sda6): using free-space-tree May 14 18:03:36.475999 kernel: BTRFS info (device sda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:03:36.476618 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 18:03:36.479044 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 18:03:36.541015 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:03:36.545028 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 18:03:36.591303 systemd-networkd[833]: lo: Link UP May 14 18:03:36.592129 systemd-networkd[833]: lo: Gained carrier May 14 18:03:36.594209 systemd-networkd[833]: Enumeration completed May 14 18:03:36.594753 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:03:36.594757 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 18:03:36.597010 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 18:03:36.597607 systemd[1]: Reached target network.target - Network. May 14 18:03:36.600598 systemd-networkd[833]: eth0: Link UP May 14 18:03:36.600602 systemd-networkd[833]: eth0: Gained carrier May 14 18:03:36.600611 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 18:03:36.606273 ignition[759]: Ignition 2.21.0 May 14 18:03:36.606288 ignition[759]: Stage: fetch-offline May 14 18:03:36.606318 ignition[759]: no configs at "/usr/lib/ignition/base.d" May 14 18:03:36.606327 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 14 18:03:36.606404 ignition[759]: parsed url from cmdline: "" May 14 18:03:36.606407 ignition[759]: no config URL provided May 14 18:03:36.609425 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:03:36.606412 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" May 14 18:03:36.606420 ignition[759]: no config at "/usr/lib/ignition/user.ign" May 14 18:03:36.606424 ignition[759]: failed to fetch config: resource requires networking May 14 18:03:36.606615 ignition[759]: Ignition finished successfully May 14 18:03:36.613008 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 14 18:03:36.640207 ignition[842]: Ignition 2.21.0 May 14 18:03:36.640222 ignition[842]: Stage: fetch May 14 18:03:36.640372 ignition[842]: no configs at "/usr/lib/ignition/base.d" May 14 18:03:36.640382 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 14 18:03:36.640476 ignition[842]: parsed url from cmdline: "" May 14 18:03:36.640482 ignition[842]: no config URL provided May 14 18:03:36.640487 ignition[842]: reading system config file "/usr/lib/ignition/user.ign" May 14 18:03:36.640494 ignition[842]: no config at "/usr/lib/ignition/user.ign" May 14 18:03:36.640521 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #1 May 14 18:03:36.640696 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 14 18:03:36.840896 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #2 May 14 18:03:36.841132 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 14 18:03:37.108985 systemd-networkd[833]: eth0: DHCPv4 address 172.236.122.223/24, gateway 172.236.122.1 acquired from 23.40.196.199 May 14 18:03:37.241248 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #3 May 14 18:03:37.334571 ignition[842]: PUT result: OK May 14 18:03:37.334623 ignition[842]: GET http://169.254.169.254/v1/user-data: attempt #1 May 14 18:03:37.469696 ignition[842]: GET result: OK May 14 18:03:37.469817 ignition[842]: parsing config with SHA512: 6c1b90653fc0d47bad103924ee960a41256ed97c305f1cfcf7aef7837c0c216e6b61855531efac92ed0dc66a12c1adaae76bfccfa6e0f361c7c82f4adcbc4084 May 14 18:03:37.473198 unknown[842]: fetched base config from "system" May 14 18:03:37.473214 unknown[842]: fetched base config from "system" May 14 18:03:37.473499 ignition[842]: fetch: fetch complete May 14 18:03:37.473219 unknown[842]: fetched user config from "akamai" May 14 18:03:37.473506 ignition[842]: fetch: fetch passed May 14 18:03:37.473543 ignition[842]: Ignition finished successfully May 14 18:03:37.476408 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 18:03:37.478381 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 18:03:37.526614 ignition[849]: Ignition 2.21.0 May 14 18:03:37.526625 ignition[849]: Stage: kargs May 14 18:03:37.526736 ignition[849]: no configs at "/usr/lib/ignition/base.d" May 14 18:03:37.526746 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 14 18:03:37.530110 ignition[849]: kargs: kargs passed May 14 18:03:37.530161 ignition[849]: Ignition finished successfully May 14 18:03:37.533227 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 18:03:37.534606 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 18:03:37.556369 ignition[856]: Ignition 2.21.0 May 14 18:03:37.556384 ignition[856]: Stage: disks May 14 18:03:37.556478 ignition[856]: no configs at "/usr/lib/ignition/base.d" May 14 18:03:37.556487 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 14 18:03:37.557020 ignition[856]: disks: disks passed May 14 18:03:37.558485 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 18:03:37.557054 ignition[856]: Ignition finished successfully May 14 18:03:37.559625 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 18:03:37.560512 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
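The fetch stage above shows the whole metadata handshake: the first two PUTs to http://169.254.169.254/v1/token fail with "network is unreachable" because DHCP hasn't completed yet, then attempt #3 succeeds and the user-data GET follows. A sketch of the same exchange; the two URLs are straight from the log, while the header names are an assumption based on Linode's metadata service and should be verified:

import urllib.request

BASE = "http://169.254.169.254/v1"

# Step 1: PUT /v1/token for a short-lived token (header name assumed).
req = urllib.request.Request(f"{BASE}/token", method="PUT",
                             headers={"Metadata-Token-Expiry-Seconds": "3600"})
with urllib.request.urlopen(req, timeout=5) as resp:
    token = resp.read().decode()

# Step 2: GET /v1/user-data with the token (header name assumed; Linode
# returns user-data base64-encoded).
req = urllib.request.Request(f"{BASE}/user-data",
                             headers={"Metadata-Token": token})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())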
May 14 18:03:37.561600 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 18:03:37.562572 systemd[1]: Reached target sysinit.target - System Initialization. May 14 18:03:37.563709 systemd[1]: Reached target basic.target - Basic System. May 14 18:03:37.565622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 18:03:37.589759 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 14 18:03:37.591892 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 18:03:37.593678 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 18:03:37.682041 systemd-networkd[833]: eth0: Gained IPv6LL May 14 18:03:37.704946 kernel: EXT4-fs (sda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none. May 14 18:03:37.705650 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 18:03:37.707453 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 18:03:37.709468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 18:03:37.712989 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 18:03:37.714598 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 18:03:37.715686 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 18:03:37.715717 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:03:37.727955 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (872) May 14 18:03:37.730450 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 18:03:37.736408 kernel: BTRFS info (device sda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:03:37.736426 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 18:03:37.736437 kernel: BTRFS info (device sda6): using free-space-tree May 14 18:03:37.737019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 18:03:37.740235 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 18:03:37.798727 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory May 14 18:03:37.804501 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory May 14 18:03:37.809877 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory May 14 18:03:37.814420 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory May 14 18:03:37.909848 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 18:03:37.912044 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 18:03:37.913763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 18:03:37.930592 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 18:03:37.933371 kernel: BTRFS info (device sda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998 May 14 18:03:37.949684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 14 18:03:37.961530 ignition[986]: INFO : Ignition 2.21.0
May 14 18:03:37.962480 ignition[986]: INFO : Stage: mount
May 14 18:03:37.963712 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:03:37.963712 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 14 18:03:37.963712 ignition[986]: INFO : mount: mount passed
May 14 18:03:37.963712 ignition[986]: INFO : Ignition finished successfully
May 14 18:03:37.967761 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:03:37.969496 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:03:38.706646 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:03:38.735171 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (996)
May 14 18:03:38.735249 kernel: BTRFS info (device sda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:03:38.739231 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:03:38.739272 kernel: BTRFS info (device sda6): using free-space-tree
May 14 18:03:38.745295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:03:38.775857 ignition[1013]: INFO : Ignition 2.21.0
May 14 18:03:38.775857 ignition[1013]: INFO : Stage: files
May 14 18:03:38.777616 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:03:38.777616 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 14 18:03:38.777616 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping
May 14 18:03:38.779847 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 18:03:38.779847 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 18:03:38.781863 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 18:03:38.781863 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 18:03:38.781863 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 18:03:38.780946 unknown[1013]: wrote ssh authorized keys file for user: core
May 14 18:03:38.785044 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:03:38.785044 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 14 18:03:39.091187 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 18:03:40.962714 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 14 18:03:40.962714 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:03:40.965971 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 14 18:03:41.203707 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 18:03:41.256540 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 18:03:41.256540 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 18:03:41.259455 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 14 18:03:41.295046 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 14 18:03:41.295046 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 14 18:03:41.295046 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 14 18:03:41.454498 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 18:03:41.694521 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 14 18:03:41.694521 ignition[1013]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 18:03:41.696672 ignition[1013]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:03:41.697758 ignition[1013]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 18:03:41.697758 ignition[1013]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 18:03:41.697758 ignition[1013]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 14 18:03:41.697758 ignition[1013]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 14 18:03:41.697758 ignition[1013]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 14 18:03:41.697758 ignition[1013]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 14 18:03:41.697758 ignition[1013]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 14 18:03:41.707002 ignition[1013]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 14 18:03:41.707002 ignition[1013]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:03:41.707002 ignition[1013]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 18:03:41.707002 ignition[1013]: INFO : files: files passed
May 14 18:03:41.707002 ignition[1013]: INFO : Ignition finished successfully
May 14 18:03:41.701581 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 18:03:41.705049 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 18:03:41.708833 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 18:03:41.718842 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 18:03:41.719015 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 18:03:41.723173 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:03:41.724651 initrd-setup-root-after-ignition[1042]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:03:41.724651 initrd-setup-root-after-ignition[1042]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 18:03:41.725758 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:03:41.727285 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 18:03:41.729068 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 18:03:41.776007 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 18:03:41.776139 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 18:03:41.777413 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 18:03:41.778421 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 18:03:41.779639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 18:03:41.780361 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 18:03:41.817967 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:03:41.819847 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 18:03:41.835765 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 18:03:41.836675 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:03:41.838072 systemd[1]: Stopped target timers.target - Timer Units.
May 14 18:03:41.839290 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 18:03:41.839492 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 18:03:41.840713 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 18:03:41.841462 systemd[1]: Stopped target basic.target - Basic System.
May 14 18:03:41.842606 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 18:03:41.843640 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:03:41.844651 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 18:03:41.845827 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:03:41.847057 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 18:03:41.848241 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:03:41.849474 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 18:03:41.850660 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 18:03:41.851863 systemd[1]: Stopped target swap.target - Swaps.
May 14 18:03:41.852969 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 18:03:41.853104 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:03:41.854342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 18:03:41.855142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:03:41.856120 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 18:03:41.856211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:03:41.857371 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 18:03:41.857496 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 18:03:41.859034 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 18:03:41.859180 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 18:03:41.859850 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 18:03:41.859993 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 18:03:41.862994 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 18:03:41.865984 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 18:03:41.866498 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 18:03:41.866642 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:03:41.867730 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 18:03:41.867862 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:03:41.875345 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 18:03:41.875442 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 18:03:41.887263 ignition[1066]: INFO : Ignition 2.21.0
May 14 18:03:41.887263 ignition[1066]: INFO : Stage: umount
May 14 18:03:41.910384 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:03:41.910384 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 14 18:03:41.910384 ignition[1066]: INFO : umount: umount passed
May 14 18:03:41.910384 ignition[1066]: INFO : Ignition finished successfully
May 14 18:03:41.897207 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 18:03:41.897778 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 18:03:41.897886 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 18:03:41.911215 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 18:03:41.911317 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 18:03:41.912745 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 18:03:41.912824 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 18:03:41.913440 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 18:03:41.913486 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 18:03:41.914487 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 18:03:41.914531 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 18:03:41.915486 systemd[1]: Stopped target network.target - Network.
May 14 18:03:41.916412 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 18:03:41.916459 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:03:41.917487 systemd[1]: Stopped target paths.target - Path Units.
May 14 18:03:41.918442 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 18:03:41.921952 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:03:41.922748 systemd[1]: Stopped target slices.target - Slice Units.
May 14 18:03:41.923773 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 18:03:41.924768 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 18:03:41.924806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:03:41.925905 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 18:03:41.925958 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:03:41.927094 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 18:03:41.927143 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 18:03:41.928171 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 18:03:41.928214 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 18:03:41.929343 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 18:03:41.929389 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 18:03:41.930732 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 18:03:41.931769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 18:03:41.937989 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 18:03:41.938103 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 18:03:41.943345 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 18:03:41.943794 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 18:03:41.943939 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 18:03:41.946138 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 18:03:41.946799 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 18:03:41.947833 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 18:03:41.947871 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:03:41.949713 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 18:03:41.951695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 18:03:41.951750 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:03:41.952332 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:03:41.952376 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:03:41.954013 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 18:03:41.954059 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 18:03:41.954759 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 18:03:41.954806 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:03:41.956619 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:03:41.959511 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 18:03:41.959576 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 18:03:41.972088 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 18:03:41.972219 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 18:03:41.974506 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 18:03:41.974669 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:03:41.975838 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 18:03:41.975878 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 18:03:41.976883 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 18:03:41.976974 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:03:41.978046 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 18:03:41.978091 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:03:41.979639 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 18:03:41.979683 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 18:03:41.980823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 18:03:41.980869 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:03:41.984005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 18:03:41.985025 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 18:03:41.985075 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:03:41.987195 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 18:03:41.987243 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:03:41.988774 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 14 18:03:41.988818 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:03:41.989967 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 18:03:41.990011 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:03:41.990762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:03:41.990805 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:03:41.993577 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 14 18:03:41.993632 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 14 18:03:41.993674 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 18:03:41.993716 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 18:03:42.000947 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 18:03:42.001056 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 18:03:42.002368 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 18:03:42.004182 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 18:03:42.038280 systemd[1]: Switching root.
May 14 18:03:42.074130 systemd-journald[205]: Journal stopped
May 14 18:03:43.136772 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
May 14 18:03:43.136794 kernel: SELinux: policy capability network_peer_controls=1
May 14 18:03:43.136806 kernel: SELinux: policy capability open_perms=1
May 14 18:03:43.136818 kernel: SELinux: policy capability extended_socket_class=1
May 14 18:03:43.136827 kernel: SELinux: policy capability always_check_network=0
May 14 18:03:43.136836 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 18:03:43.136846 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 18:03:43.136855 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 18:03:43.136864 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 18:03:43.136873 kernel: SELinux: policy capability userspace_initial_context=0
May 14 18:03:43.136885 kernel: audit: type=1403 audit(1747245822.230:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 18:03:43.136895 systemd[1]: Successfully loaded SELinux policy in 54.489ms.
May 14 18:03:43.136905 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.594ms.
May 14 18:03:43.136940 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 18:03:43.136963 systemd[1]: Detected virtualization kvm.
May 14 18:03:43.136977 systemd[1]: Detected architecture x86-64.
May 14 18:03:43.136986 systemd[1]: Detected first boot.
May 14 18:03:43.136996 systemd[1]: Initializing machine ID from random generator.
May 14 18:03:43.137006 zram_generator::config[1110]: No configuration found.
May 14 18:03:43.137016 kernel: Guest personality initialized and is inactive
May 14 18:03:43.137025 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 14 18:03:43.137034 kernel: Initialized host personality
May 14 18:03:43.137045 kernel: NET: Registered PF_VSOCK protocol family
May 14 18:03:43.137055 systemd[1]: Populated /etc with preset unit settings.
May 14 18:03:43.137066 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 18:03:43.137076 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 18:03:43.137085 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 18:03:43.137095 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 18:03:43.137105 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 18:03:43.137117 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 18:03:43.137127 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 18:03:43.137138 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 18:03:43.137148 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 18:03:43.137158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 18:03:43.137168 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 18:03:43.137177 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 18:03:43.137189 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:03:43.137199 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:03:43.137209 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 18:03:43.137219 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 18:03:43.137232 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 18:03:43.137242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:03:43.137252 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 18:03:43.137262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:03:43.137274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:03:43.137285 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 18:03:43.137295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 18:03:43.137305 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 18:03:43.137315 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 18:03:43.137325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:03:43.137335 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:03:43.137344 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:03:43.137357 systemd[1]: Reached target swap.target - Swaps.
May 14 18:03:43.137367 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 18:03:43.137377 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 18:03:43.137388 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 18:03:43.137398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:03:43.137410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:03:43.137420 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:03:43.137430 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 18:03:43.137441 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 18:03:43.137451 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 18:03:43.137461 systemd[1]: Mounting media.mount - External Media Directory...
May 14 18:03:43.137471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:43.137481 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 18:03:43.137493 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 18:03:43.137503 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 18:03:43.137514 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 18:03:43.137524 systemd[1]: Reached target machines.target - Containers.
May 14 18:03:43.137534 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 18:03:43.137545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:03:43.137555 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:03:43.137565 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 18:03:43.137578 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:03:43.137588 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:03:43.137598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:03:43.137608 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 18:03:43.137618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:03:43.137628 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 18:03:43.137638 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 18:03:43.137648 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 18:03:43.137658 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 18:03:43.137671 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 18:03:43.137681 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:03:43.137691 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:03:43.137701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:03:43.137711 kernel: fuse: init (API version 7.41)
May 14 18:03:43.137721 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:03:43.137731 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 18:03:43.137742 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 18:03:43.137753 kernel: loop: module loaded
May 14 18:03:43.137763 kernel: ACPI: bus type drm_connector registered
May 14 18:03:43.137772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:03:43.137783 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 18:03:43.137793 systemd[1]: Stopped verity-setup.service.
May 14 18:03:43.137804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:43.137814 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 18:03:43.137824 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 18:03:43.137836 systemd[1]: Mounted media.mount - External Media Directory.
May 14 18:03:43.137846 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 18:03:43.137876 systemd-journald[1190]: Collecting audit messages is disabled.
May 14 18:03:43.137899 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 18:03:43.137909 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 18:03:43.137938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 18:03:43.138962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:03:43.138976 systemd-journald[1190]: Journal started
May 14 18:03:43.138996 systemd-journald[1190]: Runtime Journal (/run/log/journal/bf04be4e6e564e43bf1b618444bf22b6) is 8M, max 78.5M, 70.5M free.
May 14 18:03:42.779279 systemd[1]: Queued start job for default target multi-user.target.
May 14 18:03:42.799756 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 14 18:03:42.800440 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 18:03:43.143496 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:03:43.144354 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 18:03:43.144678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 18:03:43.145657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:03:43.145989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:03:43.147409 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:03:43.147680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:03:43.148681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:03:43.148880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:03:43.149970 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:03:43.150218 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:03:43.151119 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:03:43.151581 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:03:43.152873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:03:43.153910 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:03:43.155100 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:03:43.156505 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:03:43.174637 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:03:43.179034 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:03:43.182994 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:03:43.183566 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:03:43.183592 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:03:43.184908 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:03:43.196072 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:03:43.198277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:03:43.200098 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:03:43.203212 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:03:43.204986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:03:43.206822 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:03:43.207422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:03:43.209258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:03:43.214825 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:03:43.218731 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:03:43.224357 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:03:43.224995 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:03:43.230180 systemd-journald[1190]: Time spent on flushing to /var/log/journal/bf04be4e6e564e43bf1b618444bf22b6 is 90.423ms for 1005 entries.
May 14 18:03:43.230180 systemd-journald[1190]: System Journal (/var/log/journal/bf04be4e6e564e43bf1b618444bf22b6) is 8M, max 195.6M, 187.6M free.
May 14 18:03:43.333740 systemd-journald[1190]: Received client request to flush runtime journal.
May 14 18:03:43.335460 kernel: loop0: detected capacity change from 0 to 146240
May 14 18:03:43.335493 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:03:43.335507 kernel: loop1: detected capacity change from 0 to 113872
May 14 18:03:43.249865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:03:43.251335 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:03:43.256489 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:03:43.309607 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
May 14 18:03:43.309620 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
May 14 18:03:43.313679 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:03:43.319300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:03:43.321533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:03:43.324783 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:03:43.330543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:03:43.345114 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:03:43.375945 kernel: loop2: detected capacity change from 0 to 8
May 14 18:03:43.396390 kernel: loop3: detected capacity change from 0 to 210664
May 14 18:03:43.394727 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:03:43.396703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:03:43.424640 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 14 18:03:43.424659 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
May 14 18:03:43.434523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:03:43.449932 kernel: loop4: detected capacity change from 0 to 146240
May 14 18:03:43.465949 kernel: loop5: detected capacity change from 0 to 113872
May 14 18:03:43.483959 kernel: loop6: detected capacity change from 0 to 8
May 14 18:03:43.486967 kernel: loop7: detected capacity change from 0 to 210664
May 14 18:03:43.511540 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
May 14 18:03:43.512105 (sd-merge)[1261]: Merged extensions into '/usr'.
May 14 18:03:43.519440 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:03:43.519526 systemd[1]: Reloading...
May 14 18:03:43.608992 zram_generator::config[1287]: No configuration found.
May 14 18:03:43.723538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:03:43.793654 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:03:43.805722 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:03:43.806196 systemd[1]: Reloading finished in 286 ms.
May 14 18:03:43.821372 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:03:43.822630 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:03:43.835074 systemd[1]: Starting ensure-sysext.service...
May 14 18:03:43.838025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:03:43.860991 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:03:43.861299 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:03:43.861627 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:03:43.861934 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:03:43.862764 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:03:43.863078 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
May 14 18:03:43.863191 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
May 14 18:03:43.867555 systemd[1]: Reload requested from client PID 1330 ('systemctl') (unit ensure-sysext.service)...
May 14 18:03:43.867660 systemd[1]: Reloading...
May 14 18:03:43.870620 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:03:43.870644 systemd-tmpfiles[1332]: Skipping /boot
May 14 18:03:43.895152 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:03:43.895225 systemd-tmpfiles[1332]: Skipping /boot
May 14 18:03:43.955937 zram_generator::config[1374]: No configuration found.
May 14 18:03:44.021797 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:03:44.092448 systemd[1]: Reloading finished in 224 ms.
May 14 18:03:44.118601 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:03:44.132297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:03:44.140831 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:03:44.144975 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:03:44.157269 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:03:44.164168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:03:44.172183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:03:44.175971 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:03:44.179664 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:44.180179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:03:44.183188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:03:44.187257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:03:44.193175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:03:44.194095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:03:44.194192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:03:44.199000 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:03:44.199538 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:44.203767 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:44.204121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:03:44.204373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:03:44.204669 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:03:44.204829 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:44.209786 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:44.211183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:03:44.217261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:03:44.217991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:03:44.218096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:03:44.218234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:03:44.226886 systemd[1]: Finished ensure-sysext.service.
May 14 18:03:44.234510 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:03:44.235492 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:03:44.241709 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:03:44.246284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:03:44.246512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:03:44.248295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:03:44.248502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:03:44.251438 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:03:44.265594 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:03:44.267553 systemd-udevd[1411]: Using default interface naming scheme 'v255'.
May 14 18:03:44.269893 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:03:44.270827 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:03:44.272565 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:03:44.272776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:03:44.276349 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:03:44.281575 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:03:44.308980 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:03:44.310901 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:03:44.319769 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:03:44.325767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:03:44.327245 augenrules[1449]: No rules
May 14 18:03:44.333721 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:03:44.335370 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:03:44.335691 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:03:44.514355 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 18:03:44.515065 systemd[1]: Reached target time-set.target - System Time Set.
May 14 18:03:44.519984 systemd-networkd[1457]: lo: Link UP
May 14 18:03:44.520237 systemd-networkd[1457]: lo: Gained carrier
May 14 18:03:44.522130 systemd-networkd[1457]: Enumeration completed
May 14 18:03:44.522142 systemd-timesyncd[1422]: No network connectivity, watching for changes.
May 14 18:03:44.522448 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:03:44.525245 systemd-resolved[1407]: Positive Trust Anchors:
May 14 18:03:44.525270 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:03:44.525297 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:03:44.525731 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:03:44.528121 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:03:44.530075 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 18:03:44.540688 systemd-resolved[1407]: Defaulting to hostname 'linux'.
May 14 18:03:44.543399 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:03:44.545166 systemd[1]: Reached target network.target - Network.
May 14 18:03:44.545638 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:03:44.546182 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:03:44.546737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 18:03:44.548992 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 18:03:44.549548 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 18:03:44.551116 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 18:03:44.551792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 18:03:44.553496 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 18:03:44.554081 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 18:03:44.554104 systemd[1]: Reached target paths.target - Path Units.
May 14 18:03:44.555968 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:03:44.556954 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 18:03:44.561288 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 18:03:44.564403 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 18:03:44.565617 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 18:03:44.566652 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 18:03:44.569138 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 18:03:44.570902 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 18:03:44.572186 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 18:03:44.573721 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:03:44.574725 systemd[1]: Reached target basic.target - Basic System.
May 14 18:03:44.576047 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 18:03:44.576087 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 18:03:44.577111 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 18:03:44.580128 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 14 18:03:44.582379 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 18:03:44.603707 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:03:44.603716 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:03:44.606439 systemd-networkd[1457]: eth0: Link UP
May 14 18:03:44.606774 systemd-networkd[1457]: eth0: Gained carrier
May 14 18:03:44.606790 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:03:44.617888 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 18:03:44.621070 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 18:03:44.624146 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 18:03:44.624700 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 18:03:44.629110 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 18:03:44.632887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 18:03:44.639351 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 18:03:44.647281 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 18:03:44.649998 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 18:03:44.652704 jq[1505]: false
May 14 18:03:44.660280 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 18:03:44.662750 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 18:03:44.663425 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 18:03:44.667386 systemd[1]: Starting update-engine.service - Update Engine...
May 14 18:03:44.675987 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 18:03:44.683154 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 18:03:44.684702 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing passwd entry cache
May 14 18:03:44.684709 oslogin_cache_refresh[1507]: Refreshing passwd entry cache
May 14 18:03:44.685560 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 18:03:44.687344 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 18:03:44.687590 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 18:03:44.688333 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 18:03:44.690325 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 18:03:44.704934 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting users, quitting
May 14 18:03:44.704934 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:03:44.704934 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing group entry cache
May 14 18:03:44.704934 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting groups, quitting
May 14 18:03:44.704934 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:03:44.703286 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 18:03:44.699067 oslogin_cache_refresh[1507]: Failure getting users, quitting
May 14 18:03:44.699084 oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:03:44.699123 oslogin_cache_refresh[1507]: Refreshing group entry cache
May 14 18:03:44.699580 oslogin_cache_refresh[1507]: Failure getting groups, quitting
May 14 18:03:44.699588 oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:03:44.708282 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 18:03:44.714978 jq[1517]: true May 14 18:03:44.728652 extend-filesystems[1506]: Found loop4 May 14 18:03:44.728652 extend-filesystems[1506]: Found loop5 May 14 18:03:44.728652 extend-filesystems[1506]: Found loop6 May 14 18:03:44.728652 extend-filesystems[1506]: Found loop7 May 14 18:03:44.728652 extend-filesystems[1506]: Found sda May 14 18:03:44.728652 extend-filesystems[1506]: Found sda1 May 14 18:03:44.728652 extend-filesystems[1506]: Found sda2 May 14 18:03:44.728652 extend-filesystems[1506]: Found sda3 May 14 18:03:44.728652 extend-filesystems[1506]: Found usr May 14 18:03:44.728652 extend-filesystems[1506]: Found sda4 May 14 18:03:44.728652 extend-filesystems[1506]: Found sda6 May 14 18:03:44.728652 extend-filesystems[1506]: Found sda7 May 14 18:03:44.728652 extend-filesystems[1506]: Found sda9 May 14 18:03:44.738550 update_engine[1516]: I20250514 18:03:44.734601 1516 main.cc:92] Flatcar Update Engine starting May 14 18:03:44.739975 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 18:03:44.740231 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 18:03:44.753993 tar[1520]: linux-amd64/helm May 14 18:03:44.759435 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 18:03:44.761860 coreos-metadata[1502]: May 14 18:03:44.760 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 14 18:03:44.771051 jq[1531]: true May 14 18:03:44.779532 systemd[1]: motdgen.service: Deactivated successfully. May 14 18:03:44.779769 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 18:03:44.789154 dbus-daemon[1503]: [system] SELinux support is enabled May 14 18:03:44.789534 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 18:03:44.792304 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 18:03:44.792325 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 18:03:44.793607 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 18:03:44.793626 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 18:03:44.821737 systemd[1]: Started update-engine.service - Update Engine. May 14 18:03:44.825880 update_engine[1516]: I20250514 18:03:44.825428 1516 update_check_scheduler.cc:74] Next update check in 10m8s May 14 18:03:44.847615 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 18:03:44.886824 bash[1565]: Updated "/home/core/.ssh/authorized_keys" May 14 18:03:44.889368 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 18:03:44.898130 systemd[1]: Starting sshkeys.service... May 14 18:03:44.926973 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 18:03:44.931112 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 14 18:03:44.938387 kernel: mousedev: PS/2 mouse device common for all mice May 14 18:03:44.950731 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 18:03:44.957929 kernel: ACPI: button: Power Button [PWRF] May 14 18:03:44.963523 systemd-logind[1515]: New seat seat0. May 14 18:03:45.010094 systemd[1]: Started systemd-logind.service - User Login Management. May 14 18:03:45.056118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 18:03:45.060170 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 18:03:45.068352 coreos-metadata[1568]: May 14 18:03:45.068 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 14 18:03:45.075574 systemd-networkd[1457]: eth0: DHCPv4 address 172.236.122.223/24, gateway 172.236.122.1 acquired from 23.40.196.199 May 14 18:03:45.076002 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1457 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 14 18:03:45.078333 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 14 18:03:45.080499 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 14 18:03:45.111501 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 18:03:45.120708 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 14 18:03:45.125441 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 18:03:45.125610 containerd[1533]: time="2025-05-14T18:03:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 18:03:45.129928 containerd[1533]: time="2025-05-14T18:03:45.129888960Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 18:03:45.160092 containerd[1533]: time="2025-05-14T18:03:45.160059140Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.5µs" May 14 18:03:45.160092 containerd[1533]: time="2025-05-14T18:03:45.160086760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 18:03:45.160160 containerd[1533]: time="2025-05-14T18:03:45.160103100Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 18:03:45.160278 containerd[1533]: time="2025-05-14T18:03:45.160257410Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 18:03:45.160312 containerd[1533]: time="2025-05-14T18:03:45.160278450Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 18:03:45.160312 containerd[1533]: time="2025-05-14T18:03:45.160299590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:03:45.160381 containerd[1533]: time="2025-05-14T18:03:45.160358630Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 18:03:45.160381 containerd[1533]: 
time="2025-05-14T18:03:45.160375170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:03:45.160585 containerd[1533]: time="2025-05-14T18:03:45.160562520Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 18:03:45.160585 containerd[1533]: time="2025-05-14T18:03:45.160581990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:03:45.160642 containerd[1533]: time="2025-05-14T18:03:45.160598700Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 18:03:45.160642 containerd[1533]: time="2025-05-14T18:03:45.160606770Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 18:03:45.160739 containerd[1533]: time="2025-05-14T18:03:45.160720290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 18:03:45.160979 containerd[1533]: time="2025-05-14T18:03:45.160958490Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:03:45.161013 containerd[1533]: time="2025-05-14T18:03:45.160994990Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 18:03:45.161013 containerd[1533]: time="2025-05-14T18:03:45.161010090Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 18:03:45.161053 containerd[1533]: time="2025-05-14T18:03:45.161046380Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 18:03:45.161952 containerd[1533]: time="2025-05-14T18:03:45.161284540Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 18:03:45.161952 containerd[1533]: time="2025-05-14T18:03:45.161348340Z" level=info msg="metadata content store policy set" policy=shared May 14 18:03:45.171327 containerd[1533]: time="2025-05-14T18:03:45.171305060Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 18:03:45.171371 containerd[1533]: time="2025-05-14T18:03:45.171337830Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 18:03:45.171371 containerd[1533]: time="2025-05-14T18:03:45.171350550Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 18:03:45.171371 containerd[1533]: time="2025-05-14T18:03:45.171360010Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 18:03:45.171463 containerd[1533]: time="2025-05-14T18:03:45.171377180Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 18:03:45.171463 containerd[1533]: time="2025-05-14T18:03:45.171385880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 18:03:45.171463 containerd[1533]: 
time="2025-05-14T18:03:45.171397280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 18:03:45.171463 containerd[1533]: time="2025-05-14T18:03:45.171406480Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 18:03:45.171463 containerd[1533]: time="2025-05-14T18:03:45.171414580Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 18:03:45.171463 containerd[1533]: time="2025-05-14T18:03:45.171422920Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 18:03:45.171463 containerd[1533]: time="2025-05-14T18:03:45.171430360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 18:03:45.171463 containerd[1533]: time="2025-05-14T18:03:45.171443630Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 18:03:45.171582 containerd[1533]: time="2025-05-14T18:03:45.171541500Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 18:03:45.171582 containerd[1533]: time="2025-05-14T18:03:45.171561930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 18:03:45.171582 containerd[1533]: time="2025-05-14T18:03:45.171574890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 18:03:45.171625 containerd[1533]: time="2025-05-14T18:03:45.171583530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 18:03:45.171625 containerd[1533]: time="2025-05-14T18:03:45.171591880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 18:03:45.171625 containerd[1533]: time="2025-05-14T18:03:45.171600270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 18:03:45.171625 containerd[1533]: time="2025-05-14T18:03:45.171609620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 18:03:45.171625 containerd[1533]: time="2025-05-14T18:03:45.171617690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 18:03:45.171709 containerd[1533]: time="2025-05-14T18:03:45.171626440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 18:03:45.171709 containerd[1533]: time="2025-05-14T18:03:45.171634980Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 18:03:45.171709 containerd[1533]: time="2025-05-14T18:03:45.171643850Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 18:03:45.171709 containerd[1533]: time="2025-05-14T18:03:45.171698330Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 18:03:45.171709 containerd[1533]: time="2025-05-14T18:03:45.171709420Z" level=info msg="Start snapshots syncer" May 14 18:03:45.171781 containerd[1533]: time="2025-05-14T18:03:45.171727720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 18:03:45.172146 containerd[1533]: time="2025-05-14T18:03:45.171891280Z" 
level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 18:03:45.172146 containerd[1533]: time="2025-05-14T18:03:45.171960700Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 18:03:45.172589 containerd[1533]: time="2025-05-14T18:03:45.172569800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 18:03:45.172725 containerd[1533]: time="2025-05-14T18:03:45.172690740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 18:03:45.172725 containerd[1533]: time="2025-05-14T18:03:45.172716840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 18:03:45.172761 containerd[1533]: time="2025-05-14T18:03:45.172726550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 18:03:45.172761 containerd[1533]: time="2025-05-14T18:03:45.172742520Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 18:03:45.172761 containerd[1533]: time="2025-05-14T18:03:45.172753440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 18:03:45.173040 containerd[1533]: time="2025-05-14T18:03:45.172762270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 18:03:45.173040 containerd[1533]: time="2025-05-14T18:03:45.172770980Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 18:03:45.173040 containerd[1533]: time="2025-05-14T18:03:45.172788590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 
18:03:45.173040 containerd[1533]: time="2025-05-14T18:03:45.172797200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 18:03:45.173040 containerd[1533]: time="2025-05-14T18:03:45.172806320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.173957430Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.173979090Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.173987490Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.174044520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.174054240Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.174063050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.174072180Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.174086430Z" level=info msg="runtime interface created" May 14 18:03:45.174090 containerd[1533]: time="2025-05-14T18:03:45.174091210Z" level=info msg="created NRI interface" May 14 18:03:45.175840 containerd[1533]: time="2025-05-14T18:03:45.174098870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 18:03:45.175840 containerd[1533]: time="2025-05-14T18:03:45.174108190Z" level=info msg="Connect containerd service" May 14 18:03:45.175840 containerd[1533]: time="2025-05-14T18:03:45.174127040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 18:03:45.175840 containerd[1533]: time="2025-05-14T18:03:45.175599520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:03:45.218810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 18:03:45.228170 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 14 18:03:45.231228 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.hostname1' May 14 18:03:45.232058 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1588 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 14 18:03:45.944510 systemd-timesyncd[1422]: Contacted time server 45.83.234.123:123 (3.flatcar.pool.ntp.org). May 14 18:03:45.944777 systemd-timesyncd[1422]: Initial clock synchronization to Wed 2025-05-14 18:03:45.944398 UTC. 
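The containerd error above ("no network config found in /etc/cni/net.d") is normal this early in boot: the CRI plugin starts before any CNI add-on has installed a config, and it re-syncs once one appears (the "Start cni network conf syncer" line that follows). For reference, a minimal bridge conflist of the shape it looks for; the network name and subnet below are illustrative, not values from this host.

```python
import json
from pathlib import Path

# Illustrative values only; a real cluster's CNI add-on chooses these.
conflist = {
    "cniVersion": "1.0.0",
    "name": "examplenet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
            },
        }
    ],
}

path = Path("/etc/cni/net.d/10-examplenet.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
```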
May 14 18:03:45.945110 systemd-resolved[1407]: Clock change detected. Flushing caches. May 14 18:03:45.958272 systemd[1]: Starting polkit.service - Authorization Manager... May 14 18:03:45.970227 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 18:03:45.977424 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 18:03:46.063723 systemd-logind[1515]: Watching system buttons on /dev/input/event2 (Power Button) May 14 18:03:46.066156 kernel: EDAC MC: Ver: 3.0.0 May 14 18:03:46.089645 containerd[1533]: time="2025-05-14T18:03:46.089237991Z" level=info msg="Start subscribing containerd event" May 14 18:03:46.089712 containerd[1533]: time="2025-05-14T18:03:46.089657911Z" level=info msg="Start recovering state" May 14 18:03:46.091073 containerd[1533]: time="2025-05-14T18:03:46.091050021Z" level=info msg="Start event monitor" May 14 18:03:46.091108 containerd[1533]: time="2025-05-14T18:03:46.091075071Z" level=info msg="Start cni network conf syncer for default" May 14 18:03:46.091729 containerd[1533]: time="2025-05-14T18:03:46.091083581Z" level=info msg="Start streaming server" May 14 18:03:46.091729 containerd[1533]: time="2025-05-14T18:03:46.091179201Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 18:03:46.091729 containerd[1533]: time="2025-05-14T18:03:46.091187641Z" level=info msg="runtime interface starting up..." May 14 18:03:46.091729 containerd[1533]: time="2025-05-14T18:03:46.091194151Z" level=info msg="starting plugins..." May 14 18:03:46.091729 containerd[1533]: time="2025-05-14T18:03:46.091211111Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 18:03:46.101746 containerd[1533]: time="2025-05-14T18:03:46.101179931Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 18:03:46.101746 containerd[1533]: time="2025-05-14T18:03:46.101236701Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 18:03:46.103293 systemd[1]: Started containerd.service - containerd container runtime. May 14 18:03:46.103363 containerd[1533]: time="2025-05-14T18:03:46.103337131Z" level=info msg="containerd successfully booted in 0.271270s" May 14 18:03:46.319584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:03:46.364780 polkitd[1601]: Started polkitd version 126 May 14 18:03:46.371985 polkitd[1601]: Loading rules from directory /etc/polkit-1/rules.d May 14 18:03:46.372264 polkitd[1601]: Loading rules from directory /run/polkit-1/rules.d May 14 18:03:46.372310 polkitd[1601]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 14 18:03:46.372534 polkitd[1601]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 14 18:03:46.372560 polkitd[1601]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 14 18:03:46.372594 polkitd[1601]: Loading rules from directory /usr/share/polkit-1/rules.d May 14 18:03:46.374231 polkitd[1601]: Finished loading, compiling and executing 2 rules May 14 18:03:46.374705 systemd[1]: Started polkit.service - Authorization Manager. 
May 14 18:03:46.375774 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 14 18:03:46.376356 polkitd[1601]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 14 18:03:46.385231 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 18:03:46.393320 systemd-networkd[1457]: eth0: Gained IPv6LL May 14 18:03:46.393964 systemd-resolved[1407]: System hostname changed to '172-236-122-223'. May 14 18:03:46.396163 systemd-hostnamed[1588]: Hostname set to <172-236-122-223> (transient) May 14 18:03:46.397879 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 18:03:46.398966 systemd[1]: Reached target network-online.target - Network is Online. May 14 18:03:46.404304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:03:46.406238 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 18:03:46.432312 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 18:03:46.436365 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 18:03:46.459541 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 18:03:46.463205 systemd[1]: issuegen.service: Deactivated successfully. May 14 18:03:46.463441 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 18:03:46.468359 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 18:03:46.500937 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 18:03:46.503054 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 18:03:46.505415 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 18:03:46.506110 systemd[1]: Reached target getty.target - Login Prompts. May 14 18:03:46.511024 coreos-metadata[1502]: May 14 18:03:46.510 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 14 18:03:46.531911 tar[1520]: linux-amd64/LICENSE May 14 18:03:46.532168 tar[1520]: linux-amd64/README.md May 14 18:03:46.545779 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 18:03:46.601464 coreos-metadata[1502]: May 14 18:03:46.601 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 14 18:03:46.788243 coreos-metadata[1502]: May 14 18:03:46.788 INFO Fetch successful May 14 18:03:46.788243 coreos-metadata[1502]: May 14 18:03:46.788 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 14 18:03:46.790121 coreos-metadata[1568]: May 14 18:03:46.790 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 14 18:03:46.887251 coreos-metadata[1568]: May 14 18:03:46.887 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 14 18:03:47.022228 coreos-metadata[1568]: May 14 18:03:47.022 INFO Fetch successful May 14 18:03:47.039009 update-ssh-keys[1661]: Updated "/home/core/.ssh/authorized_keys" May 14 18:03:47.039122 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 18:03:47.042224 coreos-metadata[1502]: May 14 18:03:47.042 INFO Fetch successful May 14 18:03:47.044184 systemd[1]: Finished sshkeys.service. May 14 18:03:47.142236 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 18:03:47.143393 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 18:03:47.244418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
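The coreos-metadata lines interleaved above show a token-then-fetch flow against the link-local metadata service: a PUT to /v1/token obtains a short-lived token, which then authorizes the /v1/instance and /v1/ssh-keys fetches. A sketch of that flow with urllib; the Metadata-Token header names follow Linode's metadata API convention but should be treated as an assumption here, since they are not visible in the log itself.

```python
import urllib.request

BASE = "http://169.254.169.254/v1"

# Step 1: PUT /v1/token for a short-lived token (header names assumed).
req = urllib.request.Request(
    f"{BASE}/token",
    method="PUT",
    headers={"Metadata-Token-Expiry-Seconds": "3600"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    token = resp.read().decode().strip()

# Step 2: use the token to fetch instance metadata.
req = urllib.request.Request(
    f"{BASE}/instance",
    headers={"Metadata-Token": token, "Accept": "application/json"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())
```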
May 14 18:03:47.245580 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 18:03:47.247261 systemd[1]: Startup finished in 2.840s (kernel) + 8.585s (initrd) + 4.357s (userspace) = 15.783s. May 14 18:03:47.250656 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:03:47.787794 kubelet[1688]: E0514 18:03:47.787737 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:03:47.791237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:03:47.791424 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:03:47.791808 systemd[1]: kubelet.service: Consumed 786ms CPU time, 241.4M memory peak. May 14 18:03:48.698364 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 18:03:48.699705 systemd[1]: Started sshd@0-172.236.122.223:22-147.75.109.163:37952.service - OpenSSH per-connection server daemon (147.75.109.163:37952). May 14 18:03:49.054547 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 37952 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:03:49.056459 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:49.062572 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 18:03:49.063635 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 18:03:49.071891 systemd-logind[1515]: New session 1 of user core. May 14 18:03:49.082767 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 18:03:49.085658 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 18:03:49.095465 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 18:03:49.097569 systemd-logind[1515]: New session c1 of user core. May 14 18:03:49.224659 systemd[1705]: Queued start job for default target default.target. May 14 18:03:49.231382 systemd[1705]: Created slice app.slice - User Application Slice. May 14 18:03:49.231410 systemd[1705]: Reached target paths.target - Paths. May 14 18:03:49.231450 systemd[1705]: Reached target timers.target - Timers. May 14 18:03:49.232727 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 18:03:49.242001 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 18:03:49.242047 systemd[1705]: Reached target sockets.target - Sockets. May 14 18:03:49.242083 systemd[1705]: Reached target basic.target - Basic System. May 14 18:03:49.242122 systemd[1705]: Reached target default.target - Main User Target. May 14 18:03:49.242170 systemd[1705]: Startup finished in 139ms. May 14 18:03:49.242423 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 18:03:49.250266 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 18:03:49.521201 systemd[1]: Started sshd@1-172.236.122.223:22-147.75.109.163:37966.service - OpenSSH per-connection server daemon (147.75.109.163:37966). 
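The kubelet failure above is the expected pre-bootstrap state: the service starts before anything has written /var/lib/kubelet/config.yaml, exits with status 1, and systemd keeps restarting it (the "Scheduled restart job" lines later in the log). kubeadm normally generates that file during init/join; purely for reference, a minimal KubeletConfiguration of the shape it contains, with illustrative values rather than this host's eventual settings.

```python
from pathlib import Path

# Illustrative defaults; kubeadm writes the real file during init/join.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
```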
May 14 18:03:49.864312 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 37966 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:03:49.866193 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:49.872166 systemd-logind[1515]: New session 2 of user core. May 14 18:03:49.878259 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 18:03:50.116746 sshd[1718]: Connection closed by 147.75.109.163 port 37966 May 14 18:03:50.117678 sshd-session[1716]: pam_unix(sshd:session): session closed for user core May 14 18:03:50.121964 systemd[1]: sshd@1-172.236.122.223:22-147.75.109.163:37966.service: Deactivated successfully. May 14 18:03:50.124811 systemd[1]: session-2.scope: Deactivated successfully. May 14 18:03:50.126280 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. May 14 18:03:50.128461 systemd-logind[1515]: Removed session 2. May 14 18:03:50.174341 systemd[1]: Started sshd@2-172.236.122.223:22-147.75.109.163:37982.service - OpenSSH per-connection server daemon (147.75.109.163:37982). May 14 18:03:50.510939 sshd[1724]: Accepted publickey for core from 147.75.109.163 port 37982 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:03:50.513021 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:50.518175 systemd-logind[1515]: New session 3 of user core. May 14 18:03:50.529282 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 18:03:50.753685 sshd[1726]: Connection closed by 147.75.109.163 port 37982 May 14 18:03:50.754710 sshd-session[1724]: pam_unix(sshd:session): session closed for user core May 14 18:03:50.759852 systemd[1]: sshd@2-172.236.122.223:22-147.75.109.163:37982.service: Deactivated successfully. May 14 18:03:50.762483 systemd[1]: session-3.scope: Deactivated successfully. May 14 18:03:50.766074 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. May 14 18:03:50.767575 systemd-logind[1515]: Removed session 3. May 14 18:03:50.821447 systemd[1]: Started sshd@3-172.236.122.223:22-147.75.109.163:37996.service - OpenSSH per-connection server daemon (147.75.109.163:37996). May 14 18:03:51.170199 sshd[1732]: Accepted publickey for core from 147.75.109.163 port 37996 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:03:51.172009 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:51.177533 systemd-logind[1515]: New session 4 of user core. May 14 18:03:51.185300 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 18:03:51.422626 sshd[1734]: Connection closed by 147.75.109.163 port 37996 May 14 18:03:51.423485 sshd-session[1732]: pam_unix(sshd:session): session closed for user core May 14 18:03:51.428715 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. May 14 18:03:51.429490 systemd[1]: sshd@3-172.236.122.223:22-147.75.109.163:37996.service: Deactivated successfully. May 14 18:03:51.431453 systemd[1]: session-4.scope: Deactivated successfully. May 14 18:03:51.433252 systemd-logind[1515]: Removed session 4. May 14 18:03:51.482282 systemd[1]: Started sshd@4-172.236.122.223:22-147.75.109.163:38000.service - OpenSSH per-connection server daemon (147.75.109.163:38000). 
May 14 18:03:51.833408 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 38000 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:03:51.835294 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:51.841302 systemd-logind[1515]: New session 5 of user core. May 14 18:03:51.847292 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 18:03:52.047347 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 18:03:52.047767 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:03:52.065787 sudo[1743]: pam_unix(sudo:session): session closed for user root May 14 18:03:52.117478 sshd[1742]: Connection closed by 147.75.109.163 port 38000 May 14 18:03:52.118887 sshd-session[1740]: pam_unix(sshd:session): session closed for user core May 14 18:03:52.124425 systemd[1]: sshd@4-172.236.122.223:22-147.75.109.163:38000.service: Deactivated successfully. May 14 18:03:52.126751 systemd[1]: session-5.scope: Deactivated successfully. May 14 18:03:52.128071 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. May 14 18:03:52.129863 systemd-logind[1515]: Removed session 5. May 14 18:03:52.180773 systemd[1]: Started sshd@5-172.236.122.223:22-147.75.109.163:38014.service - OpenSSH per-connection server daemon (147.75.109.163:38014). May 14 18:03:52.534634 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 38014 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:03:52.537014 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:52.544722 systemd-logind[1515]: New session 6 of user core. May 14 18:03:52.551270 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 18:03:52.731522 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 18:03:52.731867 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:03:52.738421 sudo[1753]: pam_unix(sudo:session): session closed for user root May 14 18:03:52.745366 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 18:03:52.745695 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:03:52.757162 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 18:03:52.805254 augenrules[1775]: No rules May 14 18:03:52.806833 systemd[1]: audit-rules.service: Deactivated successfully. May 14 18:03:52.807187 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 18:03:52.808386 sudo[1752]: pam_unix(sudo:session): session closed for user root May 14 18:03:52.858708 sshd[1751]: Connection closed by 147.75.109.163 port 38014 May 14 18:03:52.859338 sshd-session[1749]: pam_unix(sshd:session): session closed for user core May 14 18:03:52.864916 systemd[1]: sshd@5-172.236.122.223:22-147.75.109.163:38014.service: Deactivated successfully. May 14 18:03:52.867756 systemd[1]: session-6.scope: Deactivated successfully. May 14 18:03:52.868655 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. May 14 18:03:52.871000 systemd-logind[1515]: Removed session 6. May 14 18:03:52.930488 systemd[1]: Started sshd@6-172.236.122.223:22-147.75.109.163:38028.service - OpenSSH per-connection server daemon (147.75.109.163:38028). 
May 14 18:03:53.289312 sshd[1784]: Accepted publickey for core from 147.75.109.163 port 38028 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:03:53.291233 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:03:53.296603 systemd-logind[1515]: New session 7 of user core. May 14 18:03:53.299256 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 18:03:53.495037 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 18:03:53.495389 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 18:03:53.975646 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 18:03:53.985473 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 18:03:54.296215 dockerd[1806]: time="2025-05-14T18:03:54.296072251Z" level=info msg="Starting up" May 14 18:03:54.297014 dockerd[1806]: time="2025-05-14T18:03:54.296991671Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 18:03:54.350885 dockerd[1806]: time="2025-05-14T18:03:54.350703991Z" level=info msg="Loading containers: start." May 14 18:03:54.359155 kernel: Initializing XFRM netlink socket May 14 18:03:54.585649 systemd-networkd[1457]: docker0: Link UP May 14 18:03:54.587998 dockerd[1806]: time="2025-05-14T18:03:54.587965451Z" level=info msg="Loading containers: done." May 14 18:03:54.601017 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1789016435-merged.mount: Deactivated successfully. May 14 18:03:54.603231 dockerd[1806]: time="2025-05-14T18:03:54.603198731Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 18:03:54.603328 dockerd[1806]: time="2025-05-14T18:03:54.603254241Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 18:03:54.603354 dockerd[1806]: time="2025-05-14T18:03:54.603341581Z" level=info msg="Initializing buildkit" May 14 18:03:54.620658 dockerd[1806]: time="2025-05-14T18:03:54.620633801Z" level=info msg="Completed buildkit initialization" May 14 18:03:54.625150 dockerd[1806]: time="2025-05-14T18:03:54.625100931Z" level=info msg="Daemon has completed initialization" May 14 18:03:54.625199 dockerd[1806]: time="2025-05-14T18:03:54.625172101Z" level=info msg="API listen on /run/docker.sock" May 14 18:03:54.626722 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 18:03:55.222800 containerd[1533]: time="2025-05-14T18:03:55.222753061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 18:03:56.068988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154805779.mount: Deactivated successfully. 
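Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket. A quick standard-library check from the host; GET /version is a stable Engine API endpoint, and the subclass below exists only because http.client has no built-in Unix-socket transport.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client.HTTPConnection dialing a Unix domain socket."""
    def __init__(self, path: str):
        super().__init__("localhost")  # host is unused; we dial the socket path
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
info = json.loads(conn.getresponse().read())
print(info["Version"], info["ApiVersion"])
```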
May 14 18:03:57.803932 containerd[1533]: time="2025-05-14T18:03:57.803878131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:57.804784 containerd[1533]: time="2025-05-14T18:03:57.804573941Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 14 18:03:57.805287 containerd[1533]: time="2025-05-14T18:03:57.805261911Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:57.807156 containerd[1533]: time="2025-05-14T18:03:57.807115441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:03:57.807930 containerd[1533]: time="2025-05-14T18:03:57.807909251Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.58510287s" May 14 18:03:57.807994 containerd[1533]: time="2025-05-14T18:03:57.807981191Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 14 18:03:57.826418 containerd[1533]: time="2025-05-14T18:03:57.826389121Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 18:03:58.041838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 18:03:58.043721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:03:58.210263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:03:58.221394 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:03:58.298874 kubelet[2083]: E0514 18:03:58.298827 2083 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:03:58.303966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:03:58.304265 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:03:58.304743 systemd[1]: kubelet.service: Consumed 213ms CPU time, 94.1M memory peak. 
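The pull statistics in these lines are enough for a rough registry throughput estimate, e.g. for the kube-apiserver image above:

```python
# Figures copied from the pull log above.
bytes_read = 32_674_873   # "bytes read=32674873"
elapsed_s = 2.58510287    # "in 2.58510287s"

print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # ≈ 12.6 MB/s from registry.k8s.io
```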
May 14 18:04:00.179546 containerd[1533]: time="2025-05-14T18:04:00.179487361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:00.180486 containerd[1533]: time="2025-05-14T18:04:00.180454481Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 14 18:04:00.181298 containerd[1533]: time="2025-05-14T18:04:00.180899831Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:00.183170 containerd[1533]: time="2025-05-14T18:04:00.183127721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:00.184035 containerd[1533]: time="2025-05-14T18:04:00.184007941Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.35745978s" May 14 18:04:00.184080 containerd[1533]: time="2025-05-14T18:04:00.184036361Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 14 18:04:00.399799 containerd[1533]: time="2025-05-14T18:04:00.399760281Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 18:04:01.901812 containerd[1533]: time="2025-05-14T18:04:01.901749721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:01.902751 containerd[1533]: time="2025-05-14T18:04:01.902506261Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 14 18:04:01.903210 containerd[1533]: time="2025-05-14T18:04:01.903185081Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:01.905059 containerd[1533]: time="2025-05-14T18:04:01.905038211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:01.905841 containerd[1533]: time="2025-05-14T18:04:01.905820761Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.50602642s" May 14 18:04:01.905911 containerd[1533]: time="2025-05-14T18:04:01.905897251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 14 18:04:02.014272 
containerd[1533]: time="2025-05-14T18:04:02.014199231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 18:04:03.911222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3712304769.mount: Deactivated successfully. May 14 18:04:04.551066 containerd[1533]: time="2025-05-14T18:04:04.551019951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:04.551801 containerd[1533]: time="2025-05-14T18:04:04.551691321Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 14 18:04:04.552266 containerd[1533]: time="2025-05-14T18:04:04.552240001Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:04.554043 containerd[1533]: time="2025-05-14T18:04:04.553469601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:04.554043 containerd[1533]: time="2025-05-14T18:04:04.553947141Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.53950759s" May 14 18:04:04.554043 containerd[1533]: time="2025-05-14T18:04:04.553969711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 14 18:04:04.594014 containerd[1533]: time="2025-05-14T18:04:04.593986961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:04:05.280996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201907794.mount: Deactivated successfully. 
May 14 18:04:06.546681 containerd[1533]: time="2025-05-14T18:04:06.546633801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:06.547768 containerd[1533]: time="2025-05-14T18:04:06.547565091Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 18:04:06.548464 containerd[1533]: time="2025-05-14T18:04:06.548433631Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:06.550524 containerd[1533]: time="2025-05-14T18:04:06.550485931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:06.551464 containerd[1533]: time="2025-05-14T18:04:06.551436381Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.95741943s" May 14 18:04:06.551711 containerd[1533]: time="2025-05-14T18:04:06.551477231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 18:04:06.598525 containerd[1533]: time="2025-05-14T18:04:06.598467301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 18:04:07.210742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992880102.mount: Deactivated successfully. 
May 14 18:04:07.217073 containerd[1533]: time="2025-05-14T18:04:07.216473231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:07.217073 containerd[1533]: time="2025-05-14T18:04:07.217051741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 14 18:04:07.217589 containerd[1533]: time="2025-05-14T18:04:07.217568731Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:07.219073 containerd[1533]: time="2025-05-14T18:04:07.219051891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:07.219755 containerd[1533]: time="2025-05-14T18:04:07.219707201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 621.18075ms" May 14 18:04:07.219791 containerd[1533]: time="2025-05-14T18:04:07.219757681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 14 18:04:07.252752 containerd[1533]: time="2025-05-14T18:04:07.252727101Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 18:04:07.915631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082453408.mount: Deactivated successfully. May 14 18:04:08.554930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 18:04:08.556835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:04:08.704450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:04:08.710419 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 18:04:08.746926 kubelet[2239]: E0514 18:04:08.746870 2239 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 18:04:08.749905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 18:04:08.750083 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 18:04:08.750506 systemd[1]: kubelet.service: Consumed 158ms CPU time, 95.8M memory peak. 
May 14 18:04:10.186855 containerd[1533]: time="2025-05-14T18:04:10.186805991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:10.187972 containerd[1533]: time="2025-05-14T18:04:10.187950801Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 14 18:04:10.188427 containerd[1533]: time="2025-05-14T18:04:10.188387821Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:10.191275 containerd[1533]: time="2025-05-14T18:04:10.191247021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:04:10.194426 containerd[1533]: time="2025-05-14T18:04:10.194015031Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.94109637s" May 14 18:04:10.194426 containerd[1533]: time="2025-05-14T18:04:10.194040661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 14 18:04:12.208517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:04:12.209028 systemd[1]: kubelet.service: Consumed 158ms CPU time, 95.8M memory peak. May 14 18:04:12.211012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:04:12.231233 systemd[1]: Reload requested from client PID 2339 ('systemctl') (unit session-7.scope)... May 14 18:04:12.231249 systemd[1]: Reloading... May 14 18:04:12.365175 zram_generator::config[2379]: No configuration found. May 14 18:04:12.464776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:04:12.565939 systemd[1]: Reloading finished in 334 ms. May 14 18:04:12.634717 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 18:04:12.634812 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 18:04:12.635072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:04:12.635129 systemd[1]: kubelet.service: Consumed 137ms CPU time, 83.6M memory peak. May 14 18:04:12.636616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:04:12.782555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:04:12.791550 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:04:12.846211 kubelet[2436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:04:12.846596 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 14 18:04:12.846639 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:04:12.846739 kubelet[2436]: I0514 18:04:12.846716 2436 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:04:13.224356 kubelet[2436]: I0514 18:04:13.224304 2436 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:04:13.224356 kubelet[2436]: I0514 18:04:13.224344 2436 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:04:13.224660 kubelet[2436]: I0514 18:04:13.224636 2436 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:04:13.248854 kubelet[2436]: I0514 18:04:13.247900 2436 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:04:13.249885 kubelet[2436]: E0514 18:04:13.249861 2436 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.236.122.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.268438 kubelet[2436]: I0514 18:04:13.268395 2436 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 18:04:13.269965 kubelet[2436]: I0514 18:04:13.269910 2436 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:04:13.270337 kubelet[2436]: I0514 18:04:13.269966 2436 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-122-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:04:13.270525 
kubelet[2436]: I0514 18:04:13.270362 2436 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:04:13.270525 kubelet[2436]: I0514 18:04:13.270378 2436 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:04:13.271661 kubelet[2436]: I0514 18:04:13.271639 2436 state_mem.go:36] "Initialized new in-memory state store" May 14 18:04:13.272774 kubelet[2436]: I0514 18:04:13.272752 2436 kubelet.go:400] "Attempting to sync node with API server" May 14 18:04:13.272828 kubelet[2436]: I0514 18:04:13.272777 2436 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:04:13.272828 kubelet[2436]: I0514 18:04:13.272824 2436 kubelet.go:312] "Adding apiserver pod source" May 14 18:04:13.272914 kubelet[2436]: I0514 18:04:13.272857 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:04:13.282237 kubelet[2436]: I0514 18:04:13.282211 2436 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:04:13.284087 kubelet[2436]: I0514 18:04:13.284067 2436 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:04:13.284246 kubelet[2436]: W0514 18:04:13.284231 2436 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 18:04:13.285071 kubelet[2436]: I0514 18:04:13.285054 2436 server.go:1264] "Started kubelet" May 14 18:04:13.290487 kubelet[2436]: W0514 18:04:13.290346 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.122.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.290487 kubelet[2436]: E0514 18:04:13.290415 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.236.122.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.290587 kubelet[2436]: W0514 18:04:13.290494 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.122.223:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-122-223&limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.290587 kubelet[2436]: E0514 18:04:13.290517 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.236.122.223:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-122-223&limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.290587 kubelet[2436]: I0514 18:04:13.290552 2436 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:04:13.292150 kubelet[2436]: I0514 18:04:13.291395 2436 server.go:455] "Adding debug handlers to kubelet server" May 14 18:04:13.292150 kubelet[2436]: I0514 18:04:13.291877 2436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:04:13.292251 kubelet[2436]: I0514 18:04:13.292239 2436 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:04:13.292480 kubelet[2436]: E0514 18:04:13.292384 2436 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://172.236.122.223:6443/api/v1/namespaces/default/events\": dial tcp 172.236.122.223:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-122-223.183f76d8a4540f7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-122-223,UID:172-236-122-223,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-122-223,},FirstTimestamp:2025-05-14 18:04:13.285027711 +0000 UTC m=+0.489915861,LastTimestamp:2025-05-14 18:04:13.285027711 +0000 UTC m=+0.489915861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-122-223,}" May 14 18:04:13.293550 kubelet[2436]: I0514 18:04:13.293533 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:04:13.293697 kubelet[2436]: I0514 18:04:13.293685 2436 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:04:13.295214 kubelet[2436]: I0514 18:04:13.294997 2436 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:04:13.295214 kubelet[2436]: I0514 18:04:13.295047 2436 reconciler.go:26] "Reconciler: start to sync state" May 14 18:04:13.295487 kubelet[2436]: W0514 18:04:13.295445 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.122.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.295487 kubelet[2436]: E0514 18:04:13.295481 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.236.122.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.297789 kubelet[2436]: E0514 18:04:13.297766 2436 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-236-122-223\" not found" May 14 18:04:13.298541 kubelet[2436]: E0514 18:04:13.298507 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.122.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-122-223?timeout=10s\": dial tcp 172.236.122.223:6443: connect: connection refused" interval="200ms" May 14 18:04:13.298901 kubelet[2436]: E0514 18:04:13.298875 2436 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:04:13.299608 kubelet[2436]: I0514 18:04:13.299558 2436 factory.go:221] Registration of the systemd container factory successfully May 14 18:04:13.299653 kubelet[2436]: I0514 18:04:13.299626 2436 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:04:13.300870 kubelet[2436]: I0514 18:04:13.300677 2436 factory.go:221] Registration of the containerd container factory successfully May 14 18:04:13.314475 kubelet[2436]: I0514 18:04:13.314442 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:04:13.315637 kubelet[2436]: I0514 18:04:13.315623 2436 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 18:04:13.315718 kubelet[2436]: I0514 18:04:13.315707 2436 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:04:13.315778 kubelet[2436]: I0514 18:04:13.315769 2436 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:04:13.315866 kubelet[2436]: E0514 18:04:13.315845 2436 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:04:13.325461 kubelet[2436]: W0514 18:04:13.325428 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.122.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.325539 kubelet[2436]: E0514 18:04:13.325527 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.236.122.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:13.333060 kubelet[2436]: I0514 18:04:13.333047 2436 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:04:13.333363 kubelet[2436]: I0514 18:04:13.333125 2436 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:04:13.333363 kubelet[2436]: I0514 18:04:13.333164 2436 state_mem.go:36] "Initialized new in-memory state store" May 14 18:04:13.334649 kubelet[2436]: I0514 18:04:13.334636 2436 policy_none.go:49] "None policy: Start" May 14 18:04:13.335188 kubelet[2436]: I0514 18:04:13.335116 2436 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:04:13.335509 kubelet[2436]: I0514 18:04:13.335286 2436 state_mem.go:35] "Initializing new in-memory state store" May 14 18:04:13.341032 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 18:04:13.354632 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 18:04:13.358004 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 18:04:13.369078 kubelet[2436]: I0514 18:04:13.368998 2436 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:04:13.369378 kubelet[2436]: I0514 18:04:13.369229 2436 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:04:13.369378 kubelet[2436]: I0514 18:04:13.369342 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:04:13.371096 kubelet[2436]: E0514 18:04:13.371034 2436 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-122-223\" not found" May 14 18:04:13.399762 kubelet[2436]: I0514 18:04:13.399534 2436 kubelet_node_status.go:73] "Attempting to register node" node="172-236-122-223" May 14 18:04:13.400043 kubelet[2436]: E0514 18:04:13.400016 2436 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.122.223:6443/api/v1/nodes\": dial tcp 172.236.122.223:6443: connect: connection refused" node="172-236-122-223" May 14 18:04:13.416247 kubelet[2436]: I0514 18:04:13.416204 2436 topology_manager.go:215] "Topology Admit Handler" podUID="9d8cbeba869392a07542b23bf12f4ec9" podNamespace="kube-system" podName="kube-apiserver-172-236-122-223" May 14 18:04:13.417615 kubelet[2436]: I0514 18:04:13.417595 2436 topology_manager.go:215] "Topology Admit Handler" podUID="a375a262b488e68ead9bc84f7f88c7dc" podNamespace="kube-system" podName="kube-controller-manager-172-236-122-223" May 14 18:04:13.418935 kubelet[2436]: I0514 18:04:13.418752 2436 topology_manager.go:215] "Topology Admit Handler" podUID="3c2178138f47d102fbc1aab6df61dcf7" podNamespace="kube-system" podName="kube-scheduler-172-236-122-223" May 14 18:04:13.426359 systemd[1]: Created slice kubepods-burstable-poda375a262b488e68ead9bc84f7f88c7dc.slice - libcontainer container kubepods-burstable-poda375a262b488e68ead9bc84f7f88c7dc.slice. May 14 18:04:13.455264 systemd[1]: Created slice kubepods-burstable-pod3c2178138f47d102fbc1aab6df61dcf7.slice - libcontainer container kubepods-burstable-pod3c2178138f47d102fbc1aab6df61dcf7.slice. May 14 18:04:13.462532 systemd[1]: Created slice kubepods-burstable-pod9d8cbeba869392a07542b23bf12f4ec9.slice - libcontainer container kubepods-burstable-pod9d8cbeba869392a07542b23bf12f4ec9.slice. 
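Each admitted pod then gets its own per-pod slice named after its UID, with dashes in the UID escaped to underscores for systemd, as in kubepods-burstable-pod9d8cbeba869392a07542b23bf12f4ec9.slice above and kubepods-besteffort-poda8477c48_0170_4eb0_b49c_9eaadad990cb.slice near the end of this log. A sketch of that naming:

```python
# Per-pod slice naming as seen in this log: the QoS class is lowercased
# and dashes in the pod UID become underscores for systemd. The
# no-QoS-infix case for Guaranteed pods is an assumption.
def pod_slice(qos_class: str, pod_uid: str) -> str:
    uid = pod_uid.replace("-", "_")
    if qos_class == "Guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class.lower()}-pod{uid}.slice"

assert pod_slice("Burstable", "9d8cbeba869392a07542b23bf12f4ec9") == (
    "kubepods-burstable-pod9d8cbeba869392a07542b23bf12f4ec9.slice")
assert pod_slice("BestEffort", "a8477c48-0170-4eb0-b49c-9eaadad990cb") == (
    "kubepods-besteffort-poda8477c48_0170_4eb0_b49c_9eaadad990cb.slice")
```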
May 14 18:04:13.499778 kubelet[2436]: E0514 18:04:13.499693 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.122.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-122-223?timeout=10s\": dial tcp 172.236.122.223:6443: connect: connection refused" interval="400ms" May 14 18:04:13.597325 kubelet[2436]: I0514 18:04:13.597271 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-ca-certs\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:04:13.597325 kubelet[2436]: I0514 18:04:13.597315 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-k8s-certs\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:04:13.597325 kubelet[2436]: I0514 18:04:13.597334 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-kubeconfig\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:04:13.597473 kubelet[2436]: I0514 18:04:13.597349 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:04:13.597473 kubelet[2436]: I0514 18:04:13.597384 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d8cbeba869392a07542b23bf12f4ec9-ca-certs\") pod \"kube-apiserver-172-236-122-223\" (UID: \"9d8cbeba869392a07542b23bf12f4ec9\") " pod="kube-system/kube-apiserver-172-236-122-223" May 14 18:04:13.597473 kubelet[2436]: I0514 18:04:13.597397 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d8cbeba869392a07542b23bf12f4ec9-k8s-certs\") pod \"kube-apiserver-172-236-122-223\" (UID: \"9d8cbeba869392a07542b23bf12f4ec9\") " pod="kube-system/kube-apiserver-172-236-122-223" May 14 18:04:13.597473 kubelet[2436]: I0514 18:04:13.597416 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-flexvolume-dir\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:04:13.597473 kubelet[2436]: I0514 18:04:13.597430 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c2178138f47d102fbc1aab6df61dcf7-kubeconfig\") pod \"kube-scheduler-172-236-122-223\" (UID: 
\"3c2178138f47d102fbc1aab6df61dcf7\") " pod="kube-system/kube-scheduler-172-236-122-223" May 14 18:04:13.597781 kubelet[2436]: I0514 18:04:13.597445 2436 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d8cbeba869392a07542b23bf12f4ec9-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-122-223\" (UID: \"9d8cbeba869392a07542b23bf12f4ec9\") " pod="kube-system/kube-apiserver-172-236-122-223" May 14 18:04:13.602036 kubelet[2436]: I0514 18:04:13.601832 2436 kubelet_node_status.go:73] "Attempting to register node" node="172-236-122-223" May 14 18:04:13.602416 kubelet[2436]: E0514 18:04:13.602278 2436 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.122.223:6443/api/v1/nodes\": dial tcp 172.236.122.223:6443: connect: connection refused" node="172-236-122-223" May 14 18:04:13.752217 kubelet[2436]: E0514 18:04:13.752098 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:13.753038 containerd[1533]: time="2025-05-14T18:04:13.752990881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-122-223,Uid:a375a262b488e68ead9bc84f7f88c7dc,Namespace:kube-system,Attempt:0,}" May 14 18:04:13.760180 kubelet[2436]: E0514 18:04:13.759988 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:13.760359 containerd[1533]: time="2025-05-14T18:04:13.760336161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-122-223,Uid:3c2178138f47d102fbc1aab6df61dcf7,Namespace:kube-system,Attempt:0,}" May 14 18:04:13.764794 kubelet[2436]: E0514 18:04:13.764769 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:13.765159 containerd[1533]: time="2025-05-14T18:04:13.765007901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-122-223,Uid:9d8cbeba869392a07542b23bf12f4ec9,Namespace:kube-system,Attempt:0,}" May 14 18:04:13.901305 kubelet[2436]: E0514 18:04:13.901218 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.122.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-122-223?timeout=10s\": dial tcp 172.236.122.223:6443: connect: connection refused" interval="800ms" May 14 18:04:14.005083 kubelet[2436]: I0514 18:04:14.004736 2436 kubelet_node_status.go:73] "Attempting to register node" node="172-236-122-223" May 14 18:04:14.005374 kubelet[2436]: E0514 18:04:14.005162 2436 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.122.223:6443/api/v1/nodes\": dial tcp 172.236.122.223:6443: connect: connection refused" node="172-236-122-223" May 14 18:04:14.156124 kubelet[2436]: W0514 18:04:14.156062 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.122.223:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-122-223&limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:14.156124 kubelet[2436]: E0514 
18:04:14.156125 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.236.122.223:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-122-223&limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:14.353517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531196880.mount: Deactivated successfully. May 14 18:04:14.356904 containerd[1533]: time="2025-05-14T18:04:14.356869471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:04:14.358754 containerd[1533]: time="2025-05-14T18:04:14.358558611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 18:04:14.359094 containerd[1533]: time="2025-05-14T18:04:14.359068011Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:04:14.359603 containerd[1533]: time="2025-05-14T18:04:14.359563941Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:04:14.360491 containerd[1533]: time="2025-05-14T18:04:14.360466031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:04:14.361379 containerd[1533]: time="2025-05-14T18:04:14.361356941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:04:14.362049 containerd[1533]: time="2025-05-14T18:04:14.362000411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 18:04:14.362741 containerd[1533]: time="2025-05-14T18:04:14.362697961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 18:04:14.364150 containerd[1533]: time="2025-05-14T18:04:14.363188191Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.79522ms" May 14 18:04:14.365298 containerd[1533]: time="2025-05-14T18:04:14.365277701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 598.30619ms" May 14 18:04:14.366976 containerd[1533]: time="2025-05-14T18:04:14.366945841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 605.22913ms" May 14 18:04:14.396196 kubelet[2436]: W0514 18:04:14.396009 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.122.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:14.396196 kubelet[2436]: E0514 18:04:14.396074 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.236.122.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:14.408503 containerd[1533]: time="2025-05-14T18:04:14.408442081Z" level=info msg="connecting to shim 4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f" address="unix:///run/containerd/s/f530364a9f5082a8225f2d7f73c49313e34735f34800d87d48e71009fce28335" namespace=k8s.io protocol=ttrpc version=3 May 14 18:04:14.415843 containerd[1533]: time="2025-05-14T18:04:14.415799821Z" level=info msg="connecting to shim 4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658" address="unix:///run/containerd/s/188f2eb49a5921e29d63403d0cd4287f000d09298619e1563a87b773bf02b9ae" namespace=k8s.io protocol=ttrpc version=3 May 14 18:04:14.416361 containerd[1533]: time="2025-05-14T18:04:14.416336271Z" level=info msg="connecting to shim 0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d" address="unix:///run/containerd/s/7876762ffcb9904df2e745b06597a433bbdd32e836e234038c66a115e5ae7368" namespace=k8s.io protocol=ttrpc version=3 May 14 18:04:14.491318 systemd[1]: Started cri-containerd-4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658.scope - libcontainer container 4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658. May 14 18:04:14.495301 systemd[1]: Started cri-containerd-4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f.scope - libcontainer container 4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f. May 14 18:04:14.523320 systemd[1]: Started cri-containerd-0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d.scope - libcontainer container 0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d. 
May 14 18:04:14.598549 containerd[1533]: time="2025-05-14T18:04:14.598499481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-122-223,Uid:9d8cbeba869392a07542b23bf12f4ec9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d\"" May 14 18:04:14.605032 kubelet[2436]: E0514 18:04:14.604716 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:14.615240 containerd[1533]: time="2025-05-14T18:04:14.614668331Z" level=info msg="CreateContainer within sandbox \"0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:04:14.616715 containerd[1533]: time="2025-05-14T18:04:14.616675891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-122-223,Uid:a375a262b488e68ead9bc84f7f88c7dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f\"" May 14 18:04:14.618160 kubelet[2436]: E0514 18:04:14.618104 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:14.624109 containerd[1533]: time="2025-05-14T18:04:14.624074691Z" level=info msg="CreateContainer within sandbox \"4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:04:14.624625 containerd[1533]: time="2025-05-14T18:04:14.624598801Z" level=info msg="Container 8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756: CDI devices from CRI Config.CDIDevices: []" May 14 18:04:14.626496 containerd[1533]: time="2025-05-14T18:04:14.626468431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-122-223,Uid:3c2178138f47d102fbc1aab6df61dcf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658\"" May 14 18:04:14.627128 kubelet[2436]: E0514 18:04:14.627101 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:14.630256 containerd[1533]: time="2025-05-14T18:04:14.630018571Z" level=info msg="CreateContainer within sandbox \"4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:04:14.633303 containerd[1533]: time="2025-05-14T18:04:14.633281751Z" level=info msg="CreateContainer within sandbox \"0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756\"" May 14 18:04:14.633883 containerd[1533]: time="2025-05-14T18:04:14.633847571Z" level=info msg="StartContainer for \"8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756\"" May 14 18:04:14.634971 containerd[1533]: time="2025-05-14T18:04:14.634924861Z" level=info msg="Container ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456: CDI devices from CRI Config.CDIDevices: []" May 14 18:04:14.635049 containerd[1533]: 
time="2025-05-14T18:04:14.635030061Z" level=info msg="connecting to shim 8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756" address="unix:///run/containerd/s/7876762ffcb9904df2e745b06597a433bbdd32e836e234038c66a115e5ae7368" protocol=ttrpc version=3 May 14 18:04:14.640317 containerd[1533]: time="2025-05-14T18:04:14.640296851Z" level=info msg="CreateContainer within sandbox \"4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456\"" May 14 18:04:14.641084 containerd[1533]: time="2025-05-14T18:04:14.641065201Z" level=info msg="StartContainer for \"ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456\"" May 14 18:04:14.641910 containerd[1533]: time="2025-05-14T18:04:14.641889101Z" level=info msg="connecting to shim ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456" address="unix:///run/containerd/s/f530364a9f5082a8225f2d7f73c49313e34735f34800d87d48e71009fce28335" protocol=ttrpc version=3 May 14 18:04:14.642071 containerd[1533]: time="2025-05-14T18:04:14.642055821Z" level=info msg="Container fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8: CDI devices from CRI Config.CDIDevices: []" May 14 18:04:14.647333 containerd[1533]: time="2025-05-14T18:04:14.647296701Z" level=info msg="CreateContainer within sandbox \"4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8\"" May 14 18:04:14.647844 containerd[1533]: time="2025-05-14T18:04:14.647826681Z" level=info msg="StartContainer for \"fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8\"" May 14 18:04:14.650036 containerd[1533]: time="2025-05-14T18:04:14.649626521Z" level=info msg="connecting to shim fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8" address="unix:///run/containerd/s/188f2eb49a5921e29d63403d0cd4287f000d09298619e1563a87b773bf02b9ae" protocol=ttrpc version=3 May 14 18:04:14.659452 systemd[1]: Started cri-containerd-8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756.scope - libcontainer container 8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756. May 14 18:04:14.669289 systemd[1]: Started cri-containerd-ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456.scope - libcontainer container ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456. May 14 18:04:14.694268 systemd[1]: Started cri-containerd-fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8.scope - libcontainer container fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8. 
May 14 18:04:14.704182 kubelet[2436]: E0514 18:04:14.702709 2436 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.122.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-122-223?timeout=10s\": dial tcp 172.236.122.223:6443: connect: connection refused" interval="1.6s" May 14 18:04:14.707525 kubelet[2436]: W0514 18:04:14.707464 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.122.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:14.707586 kubelet[2436]: E0514 18:04:14.707530 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.236.122.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:14.774863 containerd[1533]: time="2025-05-14T18:04:14.774815871Z" level=info msg="StartContainer for \"ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456\" returns successfully" May 14 18:04:14.779297 containerd[1533]: time="2025-05-14T18:04:14.779250281Z" level=info msg="StartContainer for \"8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756\" returns successfully" May 14 18:04:14.809672 kubelet[2436]: I0514 18:04:14.809638 2436 kubelet_node_status.go:73] "Attempting to register node" node="172-236-122-223" May 14 18:04:14.810099 kubelet[2436]: E0514 18:04:14.810070 2436 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.236.122.223:6443/api/v1/nodes\": dial tcp 172.236.122.223:6443: connect: connection refused" node="172-236-122-223" May 14 18:04:14.821601 containerd[1533]: time="2025-05-14T18:04:14.821559721Z" level=info msg="StartContainer for \"fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8\" returns successfully" May 14 18:04:14.843529 kubelet[2436]: W0514 18:04:14.843446 2436 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.122.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:14.843529 kubelet[2436]: E0514 18:04:14.843507 2436 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.236.122.223:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.236.122.223:6443: connect: connection refused May 14 18:04:15.463483 kubelet[2436]: E0514 18:04:15.463445 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:15.464028 kubelet[2436]: E0514 18:04:15.463821 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:15.466161 kubelet[2436]: E0514 18:04:15.466107 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:16.413889 kubelet[2436]: I0514 18:04:16.413790 2436 kubelet_node_status.go:73] "Attempting to register node" node="172-236-122-223" May 14 
18:04:16.430840 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 14 18:04:16.472976 kubelet[2436]: E0514 18:04:16.472911 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:17.122680 kubelet[2436]: E0514 18:04:17.122212 2436 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-122-223\" not found" node="172-236-122-223" May 14 18:04:17.302016 kubelet[2436]: I0514 18:04:17.301814 2436 kubelet_node_status.go:76] "Successfully registered node" node="172-236-122-223" May 14 18:04:17.445383 kubelet[2436]: I0514 18:04:17.445268 2436 apiserver.go:52] "Watching apiserver" May 14 18:04:17.495552 kubelet[2436]: I0514 18:04:17.495514 2436 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 18:04:18.572303 kubelet[2436]: E0514 18:04:18.572128 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:19.227947 systemd[1]: Reload requested from client PID 2708 ('systemctl') (unit session-7.scope)... May 14 18:04:19.227965 systemd[1]: Reloading... May 14 18:04:19.354201 zram_generator::config[2751]: No configuration found. May 14 18:04:19.450328 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:04:19.475742 kubelet[2436]: E0514 18:04:19.475697 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:04:19.560050 systemd[1]: Reloading finished in 331 ms. May 14 18:04:19.589890 kubelet[2436]: E0514 18:04:19.589111 2436 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{172-236-122-223.183f76d8a4540f7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-122-223,UID:172-236-122-223,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-122-223,},FirstTimestamp:2025-05-14 18:04:13.285027711 +0000 UTC m=+0.489915861,LastTimestamp:2025-05-14 18:04:13.285027711 +0000 UTC m=+0.489915861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-122-223,}" May 14 18:04:19.589609 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:04:19.608644 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:04:19.608938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:04:19.608981 systemd[1]: kubelet.service: Consumed 871ms CPU time, 114.3M memory peak. May 14 18:04:19.611747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:04:19.780053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
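While the API server stayed unreachable, the Failed to ensure lease exists, will retry interval doubled on each attempt: 200ms, 400ms, 800ms, then 1.6s in the lines above. A sketch of that doubling; the 7s cap is an assumption, since only the first four values appear in this log:

```python
# Retry intervals from the log double each time: 200 -> 400 -> 800 -> 1600 ms.
# The 7000 ms cap is an assumption; it is never reached in this log.
def lease_backoff_ms(base_ms: int = 200, cap_ms: int = 7000):
    interval = base_ms
    while True:
        yield min(interval, cap_ms)
        interval *= 2

gen = lease_backoff_ms()
print([next(gen) for _ in range(4)])   # [200, 400, 800, 1600]
```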
May 14 18:04:19.787505 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:04:19.845991 kubelet[2802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:04:19.847158 kubelet[2802]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:04:19.847158 kubelet[2802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:04:19.847158 kubelet[2802]: I0514 18:04:19.846407 2802 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:04:19.854512 kubelet[2802]: I0514 18:04:19.854494 2802 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 18:04:19.854646 kubelet[2802]: I0514 18:04:19.854634 2802 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:04:19.854891 kubelet[2802]: I0514 18:04:19.854879 2802 server.go:927] "Client rotation is on, will bootstrap in background" May 14 18:04:19.856030 kubelet[2802]: I0514 18:04:19.856015 2802 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 18:04:19.857316 kubelet[2802]: I0514 18:04:19.857256 2802 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:04:19.868420 kubelet[2802]: I0514 18:04:19.868406 2802 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 18:04:19.868802 kubelet[2802]: I0514 18:04:19.868756 2802 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:04:19.869013 kubelet[2802]: I0514 18:04:19.868865 2802 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-122-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 18:04:19.869179 kubelet[2802]: I0514 18:04:19.869166 2802 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:04:19.869233 kubelet[2802]: I0514 18:04:19.869225 2802 container_manager_linux.go:301] "Creating device plugin manager" May 14 18:04:19.869319 kubelet[2802]: I0514 18:04:19.869309 2802 state_mem.go:36] "Initialized new in-memory state store" May 14 18:04:19.869456 kubelet[2802]: I0514 18:04:19.869446 2802 kubelet.go:400] "Attempting to sync node with API server" May 14 18:04:19.869979 kubelet[2802]: I0514 18:04:19.869967 2802 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:04:19.870088 kubelet[2802]: I0514 18:04:19.870078 2802 kubelet.go:312] "Adding apiserver pod source" May 14 18:04:19.872182 kubelet[2802]: I0514 18:04:19.872168 2802 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:04:19.876487 kubelet[2802]: I0514 18:04:19.876460 2802 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:04:19.876819 kubelet[2802]: I0514 18:04:19.876808 2802 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:04:19.877833 kubelet[2802]: I0514 18:04:19.877821 2802 server.go:1264] "Started kubelet" May 14 18:04:19.880687 kubelet[2802]: I0514 18:04:19.880663 2802 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:04:19.882695 kubelet[2802]: I0514 18:04:19.882679 2802 server.go:455] "Adding debug handlers to kubelet server" May 14 18:04:19.883091 kubelet[2802]: I0514 18:04:19.883063 2802 fs_resource_analyzer.go:67] "Starting 
FS ResourceAnalyzer" May 14 18:04:19.884814 kubelet[2802]: I0514 18:04:19.884758 2802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 18:04:19.885074 kubelet[2802]: I0514 18:04:19.885061 2802 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:04:19.889753 kubelet[2802]: I0514 18:04:19.889728 2802 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 18:04:19.890543 kubelet[2802]: I0514 18:04:19.890516 2802 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 18:04:19.890717 kubelet[2802]: I0514 18:04:19.890694 2802 reconciler.go:26] "Reconciler: start to sync state" May 14 18:04:19.892680 kubelet[2802]: I0514 18:04:19.892664 2802 factory.go:221] Registration of the systemd container factory successfully May 14 18:04:19.894422 kubelet[2802]: I0514 18:04:19.894195 2802 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:04:19.898430 kubelet[2802]: E0514 18:04:19.898388 2802 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:04:19.899967 kubelet[2802]: I0514 18:04:19.899871 2802 factory.go:221] Registration of the containerd container factory successfully May 14 18:04:19.901974 kubelet[2802]: I0514 18:04:19.901954 2802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 18:04:19.904379 kubelet[2802]: I0514 18:04:19.904120 2802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:04:19.904379 kubelet[2802]: I0514 18:04:19.904174 2802 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:04:19.904379 kubelet[2802]: I0514 18:04:19.904187 2802 kubelet.go:2337] "Starting kubelet main sync loop" May 14 18:04:19.904379 kubelet[2802]: E0514 18:04:19.904227 2802 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:04:19.960008 kubelet[2802]: I0514 18:04:19.959985 2802 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:04:19.960204 kubelet[2802]: I0514 18:04:19.960183 2802 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:04:19.960318 kubelet[2802]: I0514 18:04:19.960308 2802 state_mem.go:36] "Initialized new in-memory state store" May 14 18:04:19.960574 kubelet[2802]: I0514 18:04:19.960462 2802 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:04:19.960574 kubelet[2802]: I0514 18:04:19.960472 2802 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:04:19.960574 kubelet[2802]: I0514 18:04:19.960490 2802 policy_none.go:49] "None policy: Start" May 14 18:04:19.961092 kubelet[2802]: I0514 18:04:19.961080 2802 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:04:19.961869 kubelet[2802]: I0514 18:04:19.961205 2802 state_mem.go:35] "Initializing new in-memory state store" May 14 18:04:19.961869 kubelet[2802]: I0514 18:04:19.961320 2802 state_mem.go:75] "Updated machine memory state" May 14 18:04:19.965958 kubelet[2802]: I0514 18:04:19.965944 2802 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:04:19.966406 
kubelet[2802]: I0514 18:04:19.966351 2802 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 18:04:19.966479 kubelet[2802]: I0514 18:04:19.966457 2802 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 18:04:19.993968 kubelet[2802]: I0514 18:04:19.993933 2802 kubelet_node_status.go:73] "Attempting to register node" node="172-236-122-223"
May 14 18:04:20.004166 kubelet[2802]: I0514 18:04:20.002738 2802 kubelet_node_status.go:112] "Node was previously registered" node="172-236-122-223"
May 14 18:04:20.004166 kubelet[2802]: I0514 18:04:20.002849 2802 kubelet_node_status.go:76] "Successfully registered node" node="172-236-122-223"
May 14 18:04:20.004377 kubelet[2802]: I0514 18:04:20.004311 2802 topology_manager.go:215] "Topology Admit Handler" podUID="9d8cbeba869392a07542b23bf12f4ec9" podNamespace="kube-system" podName="kube-apiserver-172-236-122-223"
May 14 18:04:20.004542 kubelet[2802]: I0514 18:04:20.004510 2802 topology_manager.go:215] "Topology Admit Handler" podUID="a375a262b488e68ead9bc84f7f88c7dc" podNamespace="kube-system" podName="kube-controller-manager-172-236-122-223"
May 14 18:04:20.004668 kubelet[2802]: I0514 18:04:20.004654 2802 topology_manager.go:215] "Topology Admit Handler" podUID="3c2178138f47d102fbc1aab6df61dcf7" podNamespace="kube-system" podName="kube-scheduler-172-236-122-223"
May 14 18:04:20.026639 kubelet[2802]: E0514 18:04:20.026473 2802 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-172-236-122-223\" already exists" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:04:20.091833 kubelet[2802]: I0514 18:04:20.091784 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d8cbeba869392a07542b23bf12f4ec9-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-122-223\" (UID: \"9d8cbeba869392a07542b23bf12f4ec9\") " pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:04:20.091833 kubelet[2802]: I0514 18:04:20.091826 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-ca-certs\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:04:20.091981 kubelet[2802]: I0514 18:04:20.091864 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-kubeconfig\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:04:20.091981 kubelet[2802]: I0514 18:04:20.091888 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c2178138f47d102fbc1aab6df61dcf7-kubeconfig\") pod \"kube-scheduler-172-236-122-223\" (UID: \"3c2178138f47d102fbc1aab6df61dcf7\") " pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:04:20.091981 kubelet[2802]: I0514 18:04:20.091907 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d8cbeba869392a07542b23bf12f4ec9-ca-certs\") pod \"kube-apiserver-172-236-122-223\" (UID: \"9d8cbeba869392a07542b23bf12f4ec9\") " pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:04:20.091981 kubelet[2802]: I0514 18:04:20.091930 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-flexvolume-dir\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:04:20.091981 kubelet[2802]: I0514 18:04:20.091971 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-k8s-certs\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:04:20.092088 kubelet[2802]: I0514 18:04:20.091998 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a375a262b488e68ead9bc84f7f88c7dc-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-122-223\" (UID: \"a375a262b488e68ead9bc84f7f88c7dc\") " pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:04:20.092088 kubelet[2802]: I0514 18:04:20.092023 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d8cbeba869392a07542b23bf12f4ec9-k8s-certs\") pod \"kube-apiserver-172-236-122-223\" (UID: \"9d8cbeba869392a07542b23bf12f4ec9\") " pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:04:20.230102 sudo[2835]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 14 18:04:20.231190 sudo[2835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 14 18:04:20.313268 kubelet[2802]: E0514 18:04:20.313237 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:20.429519 kubelet[2802]: E0514 18:04:20.429487 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:20.430045 kubelet[2802]: E0514 18:04:20.430024 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:21.076777 kubelet[2802]: I0514 18:04:21.075257 2802 apiserver.go:52] "Watching apiserver"
May 14 18:04:21.091049 kubelet[2802]: I0514 18:04:21.090951 2802 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 18:04:21.091935 kubelet[2802]: E0514 18:04:21.091821 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:21.093207 kubelet[2802]: E0514 18:04:21.093192 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:21.108757 kubelet[2802]: E0514 18:04:21.108740 2802 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-122-223\" already exists" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:04:21.110082 kubelet[2802]: E0514 18:04:21.109599 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:21.280579 kubelet[2802]: I0514 18:04:21.280510 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-122-223" podStartSLOduration=1.280495251 podStartE2EDuration="1.280495251s" podCreationTimestamp="2025-05-14 18:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:04:21.280276894 +0000 UTC m=+1.487616866" watchObservedRunningTime="2025-05-14 18:04:21.280495251 +0000 UTC m=+1.487835213"
May 14 18:04:21.280987 kubelet[2802]: I0514 18:04:21.280873 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-122-223" podStartSLOduration=3.280846043 podStartE2EDuration="3.280846043s" podCreationTimestamp="2025-05-14 18:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:04:21.253008406 +0000 UTC m=+1.460348368" watchObservedRunningTime="2025-05-14 18:04:21.280846043 +0000 UTC m=+1.488186005"
May 14 18:04:21.311100 sudo[2835]: pam_unix(sudo:session): session closed for user root
May 14 18:04:21.334626 kubelet[2802]: I0514 18:04:21.334434 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-122-223" podStartSLOduration=1.33441732 podStartE2EDuration="1.33441732s" podCreationTimestamp="2025-05-14 18:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:04:21.310281821 +0000 UTC m=+1.517621783" watchObservedRunningTime="2025-05-14 18:04:21.33441732 +0000 UTC m=+1.541757282"
May 14 18:04:22.086737 kubelet[2802]: E0514 18:04:22.086703 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:22.088416 kubelet[2802]: E0514 18:04:22.088259 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:23.089837 kubelet[2802]: E0514 18:04:23.089730 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:23.186552 sudo[1787]: pam_unix(sudo:session): session closed for user root
May 14 18:04:23.238628 sshd[1786]: Connection closed by 147.75.109.163 port 38028
May 14 18:04:23.239365 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
May 14 18:04:23.243834 systemd[1]: sshd@6-172.236.122.223:22-147.75.109.163:38028.service: Deactivated successfully.
May 14 18:04:23.246895 systemd[1]: session-7.scope: Deactivated successfully.
May 14 18:04:23.247116 systemd[1]: session-7.scope: Consumed 5.460s CPU time, 291.5M memory peak.
May 14 18:04:23.250196 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit.
May 14 18:04:23.254607 systemd-logind[1515]: Removed session 7.
May 14 18:04:26.247689 kubelet[2802]: E0514 18:04:26.247606 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:26.850868 kubelet[2802]: E0514 18:04:26.850809 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:27.095706 kubelet[2802]: E0514 18:04:27.095665 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:27.095864 kubelet[2802]: E0514 18:04:27.095799 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:30.467358 update_engine[1516]: I20250514 18:04:30.467288 1516 update_attempter.cc:509] Updating boot flags...
May 14 18:04:31.398605 kubelet[2802]: E0514 18:04:31.398395 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:35.229738 kubelet[2802]: I0514 18:04:35.229540 2802 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 18:04:35.230695 containerd[1533]: time="2025-05-14T18:04:35.230347271Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 18:04:35.230960 kubelet[2802]: I0514 18:04:35.230562 2802 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 18:04:35.701200 kubelet[2802]: I0514 18:04:35.701071 2802 topology_manager.go:215] "Topology Admit Handler" podUID="a8477c48-0170-4eb0-b49c-9eaadad990cb" podNamespace="kube-system" podName="cilium-operator-599987898-6lcd5"
May 14 18:04:35.712432 systemd[1]: Created slice kubepods-besteffort-poda8477c48_0170_4eb0_b49c_9eaadad990cb.slice - libcontainer container kubepods-besteffort-poda8477c48_0170_4eb0_b49c_9eaadad990cb.slice.
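[Editor's note] The dns.go:153 errors that recur throughout this boot come from the longstanding glibc resolver limit of three nameserver entries: when the node's /etc/resolv.conf lists more, kubelet keeps the first three and logs the applied line, exactly as shown above. A minimal standalone sketch of that check follows; it is not kubelet's actual code, and the limit constant mirrors glibc's MAXNS.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// glibc (and therefore the pod's resolver) only honours the first
// three "nameserver" directives; kubelet warns when more are present.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Same shape as the log above: the truncated, applied line.
		fmt.Printf("nameserver limits exceeded: %d configured, applied line is: %s\n",
			len(servers), strings.Join(servers[:maxNameservers], " "))
	}
}
```

Run against this node it would report the same applied line, 172.232.0.13 172.232.0.22 172.232.0.9; the omitted entries are not recorded in the log.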
May 14 18:04:35.727165 kubelet[2802]: I0514 18:04:35.726618 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9p52\" (UniqueName: \"kubernetes.io/projected/a8477c48-0170-4eb0-b49c-9eaadad990cb-kube-api-access-l9p52\") pod \"cilium-operator-599987898-6lcd5\" (UID: \"a8477c48-0170-4eb0-b49c-9eaadad990cb\") " pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:04:35.727286 kubelet[2802]: I0514 18:04:35.727256 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8477c48-0170-4eb0-b49c-9eaadad990cb-cilium-config-path\") pod \"cilium-operator-599987898-6lcd5\" (UID: \"a8477c48-0170-4eb0-b49c-9eaadad990cb\") " pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:04:36.023437 kubelet[2802]: I0514 18:04:36.022553 2802 topology_manager.go:215] "Topology Admit Handler" podUID="f41533ab-5191-4ce4-bafe-e364d9d291e7" podNamespace="kube-system" podName="kube-proxy-jqlt5"
May 14 18:04:36.024853 kubelet[2802]: E0514 18:04:36.024834 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:36.026785 kubelet[2802]: I0514 18:04:36.026750 2802 topology_manager.go:215] "Topology Admit Handler" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" podNamespace="kube-system" podName="cilium-4fzkc"
May 14 18:04:36.028870 kubelet[2802]: I0514 18:04:36.028831 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f41533ab-5191-4ce4-bafe-e364d9d291e7-kube-proxy\") pod \"kube-proxy-jqlt5\" (UID: \"f41533ab-5191-4ce4-bafe-e364d9d291e7\") " pod="kube-system/kube-proxy-jqlt5"
May 14 18:04:36.028921 kubelet[2802]: I0514 18:04:36.028876 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f41533ab-5191-4ce4-bafe-e364d9d291e7-lib-modules\") pod \"kube-proxy-jqlt5\" (UID: \"f41533ab-5191-4ce4-bafe-e364d9d291e7\") " pod="kube-system/kube-proxy-jqlt5"
May 14 18:04:36.028921 kubelet[2802]: I0514 18:04:36.028914 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrstr\" (UniqueName: \"kubernetes.io/projected/f41533ab-5191-4ce4-bafe-e364d9d291e7-kube-api-access-rrstr\") pod \"kube-proxy-jqlt5\" (UID: \"f41533ab-5191-4ce4-bafe-e364d9d291e7\") " pod="kube-system/kube-proxy-jqlt5"
May 14 18:04:36.028990 kubelet[2802]: I0514 18:04:36.028953 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f41533ab-5191-4ce4-bafe-e364d9d291e7-xtables-lock\") pod \"kube-proxy-jqlt5\" (UID: \"f41533ab-5191-4ce4-bafe-e364d9d291e7\") " pod="kube-system/kube-proxy-jqlt5"
May 14 18:04:36.029467 containerd[1533]: time="2025-05-14T18:04:36.029387152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6lcd5,Uid:a8477c48-0170-4eb0-b49c-9eaadad990cb,Namespace:kube-system,Attempt:0,}"
May 14 18:04:36.037315 systemd[1]: Created slice kubepods-besteffort-podf41533ab_5191_4ce4_bafe_e364d9d291e7.slice - libcontainer container kubepods-besteffort-podf41533ab_5191_4ce4_bafe_e364d9d291e7.slice.
May 14 18:04:36.047810 systemd[1]: Created slice kubepods-burstable-pod0586fba4_5080_424b_ac15_ac66e0a9d82f.slice - libcontainer container kubepods-burstable-pod0586fba4_5080_424b_ac15_ac66e0a9d82f.slice.
May 14 18:04:36.105011 containerd[1533]: time="2025-05-14T18:04:36.104951358Z" level=info msg="connecting to shim eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7" address="unix:///run/containerd/s/1a746b0482e4a25fbf82f5e0117bd8c99dfea2650649a56975927818c4c8e659" namespace=k8s.io protocol=ttrpc version=3
May 14 18:04:36.129369 kubelet[2802]: I0514 18:04:36.129335 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-xtables-lock\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.129369 kubelet[2802]: I0514 18:04:36.129369 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-config-path\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.129777 kubelet[2802]: I0514 18:04:36.129387 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-net\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.129777 kubelet[2802]: I0514 18:04:36.129401 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-hubble-tls\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.129777 kubelet[2802]: I0514 18:04:36.129425 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cni-path\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.129777 kubelet[2802]: I0514 18:04:36.129675 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-lib-modules\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.129777 kubelet[2802]: I0514 18:04:36.129693 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgcv\" (UniqueName: \"kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-kube-api-access-kcgcv\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.129777 kubelet[2802]: I0514 18:04:36.129752 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-bpf-maps\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.130052 kubelet[2802]: I0514 18:04:36.129942 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-hostproc\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.130052 kubelet[2802]: I0514 18:04:36.129964 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-cgroup\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.130052 kubelet[2802]: I0514 18:04:36.129976 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-etc-cni-netd\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.130052 kubelet[2802]: I0514 18:04:36.129996 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-run\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.130052 kubelet[2802]: I0514 18:04:36.130009 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0586fba4-5080-424b-ac15-ac66e0a9d82f-clustermesh-secrets\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.130052 kubelet[2802]: I0514 18:04:36.130028 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-kernel\") pod \"cilium-4fzkc\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") " pod="kube-system/cilium-4fzkc"
May 14 18:04:36.192726 systemd[1]: Started cri-containerd-eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7.scope - libcontainer container eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7.
May 14 18:04:36.343145 kubelet[2802]: E0514 18:04:36.343082 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:36.344120 containerd[1533]: time="2025-05-14T18:04:36.344053005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqlt5,Uid:f41533ab-5191-4ce4-bafe-e364d9d291e7,Namespace:kube-system,Attempt:0,}"
May 14 18:04:36.371273 kubelet[2802]: E0514 18:04:36.371229 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:36.373441 containerd[1533]: time="2025-05-14T18:04:36.373403032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4fzkc,Uid:0586fba4-5080-424b-ac15-ac66e0a9d82f,Namespace:kube-system,Attempt:0,}"
May 14 18:04:36.386540 containerd[1533]: time="2025-05-14T18:04:36.386490995Z" level=info msg="connecting to shim 3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092" address="unix:///run/containerd/s/c11719cf8ccc78b6000f6cdd5774bf94592232607df52fcf90893bb78a60a431" namespace=k8s.io protocol=ttrpc version=3
May 14 18:04:36.470171 containerd[1533]: time="2025-05-14T18:04:36.470105126Z" level=info msg="connecting to shim 24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435" address="unix:///run/containerd/s/300edc295d9f71ae6980ea6b958801762313e9d4d717eab1ad1de6fcebd80b30" namespace=k8s.io protocol=ttrpc version=3
May 14 18:04:36.471441 containerd[1533]: time="2025-05-14T18:04:36.471414713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6lcd5,Uid:a8477c48-0170-4eb0-b49c-9eaadad990cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\""
May 14 18:04:36.472638 kubelet[2802]: E0514 18:04:36.472611 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:36.477151 containerd[1533]: time="2025-05-14T18:04:36.476977479Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 18:04:36.516338 systemd[1]: Started cri-containerd-3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092.scope - libcontainer container 3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092.
May 14 18:04:36.521267 systemd[1]: Started cri-containerd-24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435.scope - libcontainer container 24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435.
May 14 18:04:36.715099 containerd[1533]: time="2025-05-14T18:04:36.714506000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqlt5,Uid:f41533ab-5191-4ce4-bafe-e364d9d291e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092\""
May 14 18:04:36.716926 kubelet[2802]: E0514 18:04:36.716411 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:36.720168 containerd[1533]: time="2025-05-14T18:04:36.720095464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4fzkc,Uid:0586fba4-5080-424b-ac15-ac66e0a9d82f,Namespace:kube-system,Attempt:0,} returns sandbox id \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\""
May 14 18:04:36.722003 kubelet[2802]: E0514 18:04:36.721906 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:36.722094 containerd[1533]: time="2025-05-14T18:04:36.722052456Z" level=info msg="CreateContainer within sandbox \"3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 18:04:36.738631 containerd[1533]: time="2025-05-14T18:04:36.738529082Z" level=info msg="Container 02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:36.744674 containerd[1533]: time="2025-05-14T18:04:36.744633746Z" level=info msg="CreateContainer within sandbox \"3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce\""
May 14 18:04:36.745375 containerd[1533]: time="2025-05-14T18:04:36.745340077Z" level=info msg="StartContainer for \"02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce\""
May 14 18:04:36.746926 containerd[1533]: time="2025-05-14T18:04:36.746876855Z" level=info msg="connecting to shim 02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce" address="unix:///run/containerd/s/c11719cf8ccc78b6000f6cdd5774bf94592232607df52fcf90893bb78a60a431" protocol=ttrpc version=3
May 14 18:04:36.768425 systemd[1]: Started cri-containerd-02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce.scope - libcontainer container 02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce.
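[Editor's note] The CreateContainer / connecting-to-shim / StartContainer sequence above is containerd's task lifecycle as driven through CRI: creating a task spawns the per-container shim, and systemd tracks the result as a cri-containerd-<id>.scope unit. A rough equivalent using containerd's public Go client is sketched below; it bypasses CRI entirely (so it is not what kubelet calls), and the image reference and IDs are illustrative only.

```go
package main

import (
	"context"
	"log"
	"syscall"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Creating the task is what spawns the shim the log lines connect to.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // analogous to "StartContainer"
		log.Fatal(err)
	}
	// Stop it again immediately; a real runtime would now stream I/O.
	if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
		log.Fatal(err)
	}
	<-statusC // the TaskExit event, as logged later in this boot
}
```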
May 14 18:04:36.817758 containerd[1533]: time="2025-05-14T18:04:36.817504660Z" level=info msg="StartContainer for \"02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce\" returns successfully"
May 14 18:04:37.137717 kubelet[2802]: E0514 18:04:37.137686 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:37.148342 kubelet[2802]: I0514 18:04:37.147496 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqlt5" podStartSLOduration=1.147482326 podStartE2EDuration="1.147482326s" podCreationTimestamp="2025-05-14 18:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:04:37.147431838 +0000 UTC m=+17.354771800" watchObservedRunningTime="2025-05-14 18:04:37.147482326 +0000 UTC m=+17.354822288"
May 14 18:04:37.792823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467523436.mount: Deactivated successfully.
May 14 18:04:39.001120 containerd[1533]: time="2025-05-14T18:04:39.001061590Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:04:39.002236 containerd[1533]: time="2025-05-14T18:04:39.002200972Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 14 18:04:39.002707 containerd[1533]: time="2025-05-14T18:04:39.002656827Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:04:39.003993 containerd[1533]: time="2025-05-14T18:04:39.003910726Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.526891958s"
May 14 18:04:39.003993 containerd[1533]: time="2025-05-14T18:04:39.003937175Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 14 18:04:39.005293 containerd[1533]: time="2025-05-14T18:04:39.005274950Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 14 18:04:39.007364 containerd[1533]: time="2025-05-14T18:04:39.007331272Z" level=info msg="CreateContainer within sandbox \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 18:04:39.021658 containerd[1533]: time="2025-05-14T18:04:39.019358753Z" level=info msg="Container a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:39.029400 containerd[1533]: time="2025-05-14T18:04:39.029367161Z" level=info msg="CreateContainer within sandbox \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\""
May 14 18:04:39.030017 containerd[1533]: time="2025-05-14T18:04:39.029978050Z" level=info msg="StartContainer for \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\""
May 14 18:04:39.031210 containerd[1533]: time="2025-05-14T18:04:39.031190330Z" level=info msg="connecting to shim a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664" address="unix:///run/containerd/s/1a746b0482e4a25fbf82f5e0117bd8c99dfea2650649a56975927818c4c8e659" protocol=ttrpc version=3
May 14 18:04:39.070257 systemd[1]: Started cri-containerd-a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664.scope - libcontainer container a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664.
May 14 18:04:39.262179 containerd[1533]: time="2025-05-14T18:04:39.262062476Z" level=info msg="StartContainer for \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" returns successfully"
May 14 18:04:40.339314 kubelet[2802]: E0514 18:04:40.339176 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:41.411900 kubelet[2802]: E0514 18:04:41.356295 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:45.505207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121543449.mount: Deactivated successfully.
May 14 18:04:48.900995 containerd[1533]: time="2025-05-14T18:04:48.900297331Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:04:48.900995 containerd[1533]: time="2025-05-14T18:04:48.900946059Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 14 18:04:48.901798 containerd[1533]: time="2025-05-14T18:04:48.901762594Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:04:48.902889 containerd[1533]: time="2025-05-14T18:04:48.902845653Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.897489836s"
May 14 18:04:48.902889 containerd[1533]: time="2025-05-14T18:04:48.902884903Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 14 18:04:48.907257 containerd[1533]: time="2025-05-14T18:04:48.907228302Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:04:48.917780 containerd[1533]: time="2025-05-14T18:04:48.916231965Z" level=info msg="Container df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:48.922262 containerd[1533]: time="2025-05-14T18:04:48.922178174Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\""
May 14 18:04:48.923266 containerd[1533]: time="2025-05-14T18:04:48.923246315Z" level=info msg="StartContainer for \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\""
May 14 18:04:48.924796 containerd[1533]: time="2025-05-14T18:04:48.924068929Z" level=info msg="connecting to shim df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5" address="unix:///run/containerd/s/300edc295d9f71ae6980ea6b958801762313e9d4d717eab1ad1de6fcebd80b30" protocol=ttrpc version=3
May 14 18:04:48.964272 systemd[1]: Started cri-containerd-df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5.scope - libcontainer container df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5.
May 14 18:04:49.003214 containerd[1533]: time="2025-05-14T18:04:49.002568623Z" level=info msg="StartContainer for \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" returns successfully"
May 14 18:04:49.020876 systemd[1]: cri-containerd-df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5.scope: Deactivated successfully.
May 14 18:04:49.023194 containerd[1533]: time="2025-05-14T18:04:49.023030837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" id:\"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" pid:3281 exited_at:{seconds:1747245889 nanos:22665593}"
May 14 18:04:49.023194 containerd[1533]: time="2025-05-14T18:04:49.023074116Z" level=info msg="received exit event container_id:\"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" id:\"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" pid:3281 exited_at:{seconds:1747245889 nanos:22665593}"
May 14 18:04:49.055496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5-rootfs.mount: Deactivated successfully.
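[Editor's note] The exited_at fields in the TaskExit events above are protobuf timestamps: seconds plus nanoseconds since the Unix epoch. A quick way to confirm they line up with the surrounding journal timestamps (a trivial sketch, using the values from the df1d88cb... exit event):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1747245889 nanos:22665593} from the log above.
	exitedAt := time.Unix(1747245889, 22665593).UTC()
	fmt.Println(exitedAt) // 2025-05-14 18:04:49.022665593 +0000 UTC
}
```

The result matches the 18:04:49.023 journal lines that record the same exit, modulo the sub-millisecond delay before containerd logged the event.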
May 14 18:04:49.421012 kubelet[2802]: E0514 18:04:49.420884 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:49.425506 containerd[1533]: time="2025-05-14T18:04:49.425432081Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:04:49.437078 containerd[1533]: time="2025-05-14T18:04:49.436784483Z" level=info msg="Container a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:49.442073 containerd[1533]: time="2025-05-14T18:04:49.441426333Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\""
May 14 18:04:49.443892 kubelet[2802]: I0514 18:04:49.443844 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6lcd5" podStartSLOduration=11.914880088 podStartE2EDuration="14.443828231s" podCreationTimestamp="2025-05-14 18:04:35 +0000 UTC" firstStartedPulling="2025-05-14 18:04:36.476041927 +0000 UTC m=+16.683381899" lastFinishedPulling="2025-05-14 18:04:39.00499008 +0000 UTC m=+19.212330042" observedRunningTime="2025-05-14 18:04:40.417368409 +0000 UTC m=+20.624708371" watchObservedRunningTime="2025-05-14 18:04:49.443828231 +0000 UTC m=+29.651168203"
May 14 18:04:49.444654 containerd[1533]: time="2025-05-14T18:04:49.444630327Z" level=info msg="StartContainer for \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\""
May 14 18:04:49.446967 containerd[1533]: time="2025-05-14T18:04:49.446785329Z" level=info msg="connecting to shim a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9" address="unix:///run/containerd/s/300edc295d9f71ae6980ea6b958801762313e9d4d717eab1ad1de6fcebd80b30" protocol=ttrpc version=3
May 14 18:04:49.474272 systemd[1]: Started cri-containerd-a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9.scope - libcontainer container a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9.
May 14 18:04:49.511666 containerd[1533]: time="2025-05-14T18:04:49.511635400Z" level=info msg="StartContainer for \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" returns successfully"
May 14 18:04:49.526447 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 18:04:49.527016 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 18:04:49.527247 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 18:04:49.530621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:04:49.549623 systemd[1]: cri-containerd-a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9.scope: Deactivated successfully.
May 14 18:04:49.549865 containerd[1533]: time="2025-05-14T18:04:49.549831205Z" level=info msg="received exit event container_id:\"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" id:\"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" pid:3327 exited_at:{seconds:1747245889 nanos:549600819}"
May 14 18:04:49.551320 containerd[1533]: time="2025-05-14T18:04:49.550508003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" id:\"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" pid:3327 exited_at:{seconds:1747245889 nanos:549600819}"
May 14 18:04:49.575214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:04:50.113768 kubelet[2802]: I0514 18:04:50.113733 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:04:50.113768 kubelet[2802]: I0514 18:04:50.113766 2802 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:04:50.116405 kubelet[2802]: I0514 18:04:50.116383 2802 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:04:50.125401 kubelet[2802]: I0514 18:04:50.125383 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:04:50.125475 kubelet[2802]: I0514 18:04:50.125458 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-4fzkc","kube-system/cilium-operator-599987898-6lcd5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-proxy-jqlt5","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:04:50.125521 kubelet[2802]: E0514 18:04:50.125499 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:04:50.125521 kubelet[2802]: E0514 18:04:50.125511 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:04:50.125521 kubelet[2802]: E0514 18:04:50.125519 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:04:50.125582 kubelet[2802]: E0514 18:04:50.125527 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:04:50.125582 kubelet[2802]: E0514 18:04:50.125535 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:04:50.125582 kubelet[2802]: E0514 18:04:50.125542 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:04:50.125582 kubelet[2802]: I0514 18:04:50.125550 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:04:50.426760 kubelet[2802]: E0514 18:04:50.426270 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:50.430060 containerd[1533]: time="2025-05-14T18:04:50.429999308Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:04:50.444473 containerd[1533]: time="2025-05-14T18:04:50.444403583Z" level=info msg="Container 06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:50.448551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202321669.mount: Deactivated successfully.
May 14 18:04:50.453893 containerd[1533]: time="2025-05-14T18:04:50.453848189Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\""
May 14 18:04:50.455196 containerd[1533]: time="2025-05-14T18:04:50.454655146Z" level=info msg="StartContainer for \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\""
May 14 18:04:50.456206 containerd[1533]: time="2025-05-14T18:04:50.456186771Z" level=info msg="connecting to shim 06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc" address="unix:///run/containerd/s/300edc295d9f71ae6980ea6b958801762313e9d4d717eab1ad1de6fcebd80b30" protocol=ttrpc version=3
May 14 18:04:50.485258 systemd[1]: Started cri-containerd-06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc.scope - libcontainer container 06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc.
May 14 18:04:50.527296 containerd[1533]: time="2025-05-14T18:04:50.527268091Z" level=info msg="StartContainer for \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" returns successfully"
May 14 18:04:50.530755 systemd[1]: cri-containerd-06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc.scope: Deactivated successfully.
May 14 18:04:50.535100 containerd[1533]: time="2025-05-14T18:04:50.535065303Z" level=info msg="received exit event container_id:\"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" id:\"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" pid:3374 exited_at:{seconds:1747245890 nanos:534651480}"
May 14 18:04:50.535407 containerd[1533]: time="2025-05-14T18:04:50.535311899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" id:\"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" pid:3374 exited_at:{seconds:1747245890 nanos:534651480}"
May 14 18:04:50.570634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc-rootfs.mount: Deactivated successfully.
May 14 18:04:51.430649 kubelet[2802]: E0514 18:04:51.430600 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:51.435410 containerd[1533]: time="2025-05-14T18:04:51.435355792Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:04:51.452068 containerd[1533]: time="2025-05-14T18:04:51.451968628Z" level=info msg="Container 32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:51.456813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549934230.mount: Deactivated successfully.
May 14 18:04:51.462361 containerd[1533]: time="2025-05-14T18:04:51.462332309Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\""
May 14 18:04:51.463211 containerd[1533]: time="2025-05-14T18:04:51.462926480Z" level=info msg="StartContainer for \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\""
May 14 18:04:51.464204 containerd[1533]: time="2025-05-14T18:04:51.464172081Z" level=info msg="connecting to shim 32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd" address="unix:///run/containerd/s/300edc295d9f71ae6980ea6b958801762313e9d4d717eab1ad1de6fcebd80b30" protocol=ttrpc version=3
May 14 18:04:51.490290 systemd[1]: Started cri-containerd-32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd.scope - libcontainer container 32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd.
May 14 18:04:51.537009 systemd[1]: cri-containerd-32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd.scope: Deactivated successfully.
May 14 18:04:51.539741 containerd[1533]: time="2025-05-14T18:04:51.539570717Z" level=info msg="received exit event container_id:\"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" id:\"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" pid:3412 exited_at:{seconds:1747245891 nanos:539215293}"
May 14 18:04:51.540586 containerd[1533]: time="2025-05-14T18:04:51.540544232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" id:\"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" pid:3412 exited_at:{seconds:1747245891 nanos:539215293}"
May 14 18:04:51.547085 containerd[1533]: time="2025-05-14T18:04:51.547059733Z" level=info msg="StartContainer for \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" returns successfully"
May 14 18:04:51.558700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd-rootfs.mount: Deactivated successfully.
May 14 18:04:52.435655 kubelet[2802]: E0514 18:04:52.435616 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:52.440220 containerd[1533]: time="2025-05-14T18:04:52.439689442Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:04:52.457256 containerd[1533]: time="2025-05-14T18:04:52.457216091Z" level=info msg="Container c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf: CDI devices from CRI Config.CDIDevices: []"
May 14 18:04:52.460028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129000186.mount: Deactivated successfully.
May 14 18:04:52.467727 containerd[1533]: time="2025-05-14T18:04:52.467692771Z" level=info msg="CreateContainer within sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\""
May 14 18:04:52.468346 containerd[1533]: time="2025-05-14T18:04:52.468295002Z" level=info msg="StartContainer for \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\""
May 14 18:04:52.469240 containerd[1533]: time="2025-05-14T18:04:52.469169209Z" level=info msg="connecting to shim c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf" address="unix:///run/containerd/s/300edc295d9f71ae6980ea6b958801762313e9d4d717eab1ad1de6fcebd80b30" protocol=ttrpc version=3
May 14 18:04:52.493267 systemd[1]: Started cri-containerd-c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf.scope - libcontainer container c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf.
May 14 18:04:52.531530 containerd[1533]: time="2025-05-14T18:04:52.531475846Z" level=info msg="StartContainer for \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" returns successfully"
May 14 18:04:52.699742 containerd[1533]: time="2025-05-14T18:04:52.699643523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" id:\"fc1c96db26ee581dd571a0ab3c5b6675e9ebc0b0a7761bdaa658c12fc2814eaa\" pid:3481 exited_at:{seconds:1747245892 nanos:699233279}"
May 14 18:04:52.756443 kubelet[2802]: I0514 18:04:52.755297 2802 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 14 18:04:53.444768 kubelet[2802]: E0514 18:04:53.443996 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:54.445603 kubelet[2802]: E0514 18:04:54.445568 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:55.117534 systemd-networkd[1457]: cilium_host: Link UP
May 14 18:04:55.118493 systemd-networkd[1457]: cilium_net: Link UP
May 14 18:04:55.121364 systemd-networkd[1457]: cilium_host: Gained carrier
May 14 18:04:55.121553 systemd-networkd[1457]: cilium_net: Gained carrier
May 14 18:04:55.243852 systemd-networkd[1457]: cilium_vxlan: Link UP
May 14 18:04:55.243939 systemd-networkd[1457]: cilium_vxlan: Gained carrier
May 14 18:04:55.337276 systemd-networkd[1457]: cilium_net: Gained IPv6LL
May 14 18:04:55.449337 kubelet[2802]: E0514 18:04:55.449236 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:55.536301 kernel: NET: Registered PF_ALG protocol family
May 14 18:04:55.961437 systemd-networkd[1457]: cilium_host: Gained IPv6LL
May 14 18:04:56.776834 systemd-networkd[1457]: lxc_health: Link UP
May 14 18:04:56.777992 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL
May 14 18:04:56.786719 systemd-networkd[1457]: lxc_health: Gained carrier
May 14 18:04:56.789119 kubelet[2802]: E0514 18:04:56.789090 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:04:56.814166 kubelet[2802]: I0514 18:04:56.812834 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4fzkc" podStartSLOduration=8.632420942 podStartE2EDuration="20.812819767s" podCreationTimestamp="2025-05-14 18:04:36 +0000 UTC" firstStartedPulling="2025-05-14 18:04:36.723399331 +0000 UTC m=+16.930739293" lastFinishedPulling="2025-05-14 18:04:48.903798156 +0000 UTC m=+29.111138118" observedRunningTime="2025-05-14 18:04:53.46798982 +0000 UTC m=+33.675329782" watchObservedRunningTime="2025-05-14 18:04:56.812819767 +0000 UTC m=+37.020159729"
May 14 18:04:58.585339 systemd-networkd[1457]: lxc_health: Gained IPv6LL
May 14 18:05:00.171711 kubelet[2802]: I0514 18:05:00.171625 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:05:00.171711 kubelet[2802]: I0514 18:05:00.171707 2802 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:05:00.175158 kubelet[2802]: I0514 18:05:00.175095 2802 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:05:00.197853 kubelet[2802]: I0514 18:05:00.197824 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:05:00.198432 kubelet[2802]: I0514 18:05:00.198396 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-proxy-jqlt5","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:05:00.198595 kubelet[2802]: E0514 18:05:00.198571 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:05:00.198625 kubelet[2802]: E0514 18:05:00.198598 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:05:00.198731 kubelet[2802]: E0514 18:05:00.198709 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:05:00.198731 kubelet[2802]: E0514 18:05:00.198731 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:05:00.198788 kubelet[2802]: E0514 18:05:00.198745 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:05:00.198788 kubelet[2802]: E0514 18:05:00.198754 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:05:00.198788 kubelet[2802]: I0514 18:05:00.198763 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:05:01.553650 kubelet[2802]: I0514 18:05:01.553336 2802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 18:05:01.554512 kubelet[2802]: E0514 18:05:01.554301 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:05:01.798821 kubelet[2802]: E0514 18:05:01.798794 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:05:10.216426 kubelet[2802]: I0514 18:05:10.216200 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:05:10.216426 kubelet[2802]: I0514 18:05:10.216420 2802 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:05:10.218585 kubelet[2802]: I0514 18:05:10.218518 2802 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:05:10.232069 kubelet[2802]: I0514 18:05:10.232041 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:05:10.232239 kubelet[2802]: I0514 18:05:10.232156 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-proxy-jqlt5","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:05:10.232239 kubelet[2802]: E0514 18:05:10.232213 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:05:10.232239 kubelet[2802]: E0514 18:05:10.232225 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:05:10.232239 kubelet[2802]: E0514 18:05:10.232234 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:05:10.232239 kubelet[2802]: E0514 18:05:10.232243 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:05:10.232420 kubelet[2802]: E0514 18:05:10.232250 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:05:10.232420 kubelet[2802]: E0514 18:05:10.232257 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:05:10.232420 kubelet[2802]: I0514 18:05:10.232266 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:05:20.257858 kubelet[2802]: I0514 18:05:20.257723 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:05:20.257858 kubelet[2802]: I0514 18:05:20.257795 2802 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:05:20.260847 kubelet[2802]: I0514 18:05:20.260820 2802 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:05:20.275127 kubelet[2802]: I0514 18:05:20.275095 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:05:20.275279 kubelet[2802]: I0514 18:05:20.275252 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-proxy-jqlt5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:05:20.275375 kubelet[2802]: E0514 18:05:20.275309 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:05:20.275375 kubelet[2802]: E0514 18:05:20.275341 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:05:20.275375 kubelet[2802]: E0514 18:05:20.275358 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:05:20.275521 kubelet[2802]: E0514 18:05:20.275382 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:05:20.275521 kubelet[2802]: E0514 18:05:20.275397 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:05:20.275521 kubelet[2802]: E0514 18:05:20.275412 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:05:20.275521 kubelet[2802]: I0514 18:05:20.275428 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:05:30.289484 kubelet[2802]: I0514 18:05:30.289435 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:05:30.289484 kubelet[2802]: I0514 18:05:30.289474 2802 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:05:30.291638 kubelet[2802]: I0514 18:05:30.291623 2802 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:05:30.301300 kubelet[2802]: I0514 18:05:30.301283 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:05:30.301390 kubelet[2802]: I0514 18:05:30.301375 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-proxy-jqlt5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:05:30.301421 kubelet[2802]: E0514 18:05:30.301413 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:05:30.301446 kubelet[2802]: E0514 18:05:30.301424 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:05:30.301446 kubelet[2802]: E0514 18:05:30.301432 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:05:30.301446 kubelet[2802]: E0514 18:05:30.301440 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:05:30.301505 kubelet[2802]: E0514 18:05:30.301448 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:05:30.301505 kubelet[2802]: E0514 18:05:30.301456 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:05:30.301505 kubelet[2802]: I0514 18:05:30.301464 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:05:30.905712 kubelet[2802]: E0514 18:05:30.905678 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:05:38.905090 kubelet[2802]: E0514 18:05:38.905046 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:05:40.316432 kubelet[2802]: I0514 18:05:40.316406 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:05:40.316432 kubelet[2802]: I0514 18:05:40.316439 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:05:40.316806 kubelet[2802]: I0514 18:05:40.316497 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-proxy-jqlt5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:05:40.316806 kubelet[2802]: E0514 18:05:40.316523 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:05:40.316806 kubelet[2802]: E0514 18:05:40.316534 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:05:40.316806 kubelet[2802]: E0514 18:05:40.316541 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:05:40.316806 kubelet[2802]: E0514 18:05:40.316549 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:05:40.316806 kubelet[2802]: E0514 18:05:40.316556 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:05:40.316806 kubelet[2802]: E0514 18:05:40.316563 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:05:40.316806 kubelet[2802]: I0514 18:05:40.316571 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:05:41.907020 kubelet[2802]: E0514 18:05:41.906943 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:05:50.334184 kubelet[2802]: I0514 18:05:50.334154 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:05:50.334184 kubelet[2802]: I0514 18:05:50.334189 2802 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:05:50.336301 kubelet[2802]: I0514 18:05:50.336280 2802 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:05:50.346045 kubelet[2802]: I0514 18:05:50.345866 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:05:50.346045 kubelet[2802]: I0514 18:05:50.345951 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-proxy-jqlt5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:05:50.346045 kubelet[2802]: E0514 18:05:50.345983 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:05:50.346045 kubelet[2802]: E0514 18:05:50.345994 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:05:50.346045 kubelet[2802]: E0514 18:05:50.346002 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:05:50.346045 kubelet[2802]: E0514 18:05:50.346010 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:05:50.346045 kubelet[2802]: E0514 18:05:50.346017 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:05:50.346045 kubelet[2802]: E0514 18:05:50.346023 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223"
May 14 18:05:50.346045 kubelet[2802]: I0514 18:05:50.346032 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
May 14 18:05:50.905238 kubelet[2802]: E0514 18:05:50.905005 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:05:50.905849 kubelet[2802]: E0514 18:05:50.905824 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:06:00.364099 kubelet[2802]: I0514 18:06:00.363391 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 14 18:06:00.364099 kubelet[2802]: I0514 18:06:00.363436 2802 container_gc.go:88] "Attempting to delete unused containers"
May 14 18:06:00.366809 kubelet[2802]: I0514 18:06:00.366797 2802 image_gc_manager.go:404] "Attempting to delete unused images"
May 14 18:06:00.379239 kubelet[2802]: I0514 18:06:00.379200 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 14 18:06:00.379430 kubelet[2802]: I0514 18:06:00.379404 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-proxy-jqlt5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"]
May 14 18:06:00.379464 kubelet[2802]: E0514 18:06:00.379452 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5"
May 14 18:06:00.379491 kubelet[2802]: E0514 18:06:00.379466 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc"
May 14 18:06:00.379491 kubelet[2802]: E0514 18:06:00.379475 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5"
May 14 18:06:00.379491 kubelet[2802]: E0514 18:06:00.379485 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223"
May 14 18:06:00.379590 kubelet[2802]: E0514 18:06:00.379495 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223"
May 14 18:06:00.379590 kubelet[2802]: E0514 18:06:00.379503 2802 eviction_manager.go:598] "Eviction manager: cannot evict
a critical pod" pod="kube-system/kube-scheduler-172-236-122-223" May 14 18:06:00.379590 kubelet[2802]: I0514 18:06:00.379513 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:06:10.401006 kubelet[2802]: I0514 18:06:10.400969 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:06:10.402071 kubelet[2802]: I0514 18:06:10.401024 2802 container_gc.go:88] "Attempting to delete unused containers" May 14 18:06:10.403328 kubelet[2802]: I0514 18:06:10.403309 2802 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:06:10.414013 kubelet[2802]: I0514 18:06:10.413987 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:06:10.414212 kubelet[2802]: I0514 18:06:10.414195 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-proxy-jqlt5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"] May 14 18:06:10.414300 kubelet[2802]: E0514 18:06:10.414268 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5" May 14 18:06:10.414300 kubelet[2802]: E0514 18:06:10.414286 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc" May 14 18:06:10.414300 kubelet[2802]: E0514 18:06:10.414294 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5" May 14 18:06:10.414300 kubelet[2802]: E0514 18:06:10.414302 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:06:10.414418 kubelet[2802]: E0514 18:06:10.414310 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223" May 14 18:06:10.414418 kubelet[2802]: E0514 18:06:10.414318 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223" May 14 18:06:10.414418 kubelet[2802]: I0514 18:06:10.414326 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:06:17.359810 systemd[1]: Started sshd@7-172.236.122.223:22-147.75.109.163:54128.service - OpenSSH per-connection server daemon (147.75.109.163:54128). May 14 18:06:17.696170 sshd[3918]: Accepted publickey for core from 147.75.109.163 port 54128 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:17.698421 sshd-session[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:17.704724 systemd-logind[1515]: New session 8 of user core. May 14 18:06:17.710253 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:06:18.070823 sshd[3920]: Connection closed by 147.75.109.163 port 54128 May 14 18:06:18.071643 sshd-session[3918]: pam_unix(sshd:session): session closed for user core May 14 18:06:18.075616 systemd[1]: sshd@7-172.236.122.223:22-147.75.109.163:54128.service: Deactivated successfully. May 14 18:06:18.077675 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:06:18.078859 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. 
May 14 18:06:18.081530 systemd-logind[1515]: Removed session 8. May 14 18:06:19.905957 kubelet[2802]: E0514 18:06:19.905767 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:06:20.433296 kubelet[2802]: I0514 18:06:20.433264 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:06:20.433296 kubelet[2802]: I0514 18:06:20.433299 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:06:20.433546 kubelet[2802]: I0514 18:06:20.433387 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-proxy-jqlt5","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"] May 14 18:06:20.433546 kubelet[2802]: E0514 18:06:20.433417 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5" May 14 18:06:20.433546 kubelet[2802]: E0514 18:06:20.433440 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc" May 14 18:06:20.433546 kubelet[2802]: E0514 18:06:20.433449 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:06:20.433546 kubelet[2802]: E0514 18:06:20.433457 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5" May 14 18:06:20.433546 kubelet[2802]: E0514 18:06:20.433466 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223" May 14 18:06:20.433546 kubelet[2802]: E0514 18:06:20.433474 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223" May 14 18:06:20.433546 kubelet[2802]: I0514 18:06:20.433482 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:06:23.137263 systemd[1]: Started sshd@8-172.236.122.223:22-147.75.109.163:39938.service - OpenSSH per-connection server daemon (147.75.109.163:39938). May 14 18:06:23.483480 sshd[3936]: Accepted publickey for core from 147.75.109.163 port 39938 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:23.484677 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:23.488967 systemd-logind[1515]: New session 9 of user core. May 14 18:06:23.493240 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:06:23.813556 sshd[3938]: Connection closed by 147.75.109.163 port 39938 May 14 18:06:23.814694 sshd-session[3936]: pam_unix(sshd:session): session closed for user core May 14 18:06:23.819216 systemd[1]: sshd@8-172.236.122.223:22-147.75.109.163:39938.service: Deactivated successfully. May 14 18:06:23.821665 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:06:23.823760 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. May 14 18:06:23.825351 systemd-logind[1515]: Removed session 9. 
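Each ranked pod is refused with "cannot evict a critical pod". The kubelet treats static/mirror pods (the kube-apiserver, kube-controller-manager and kube-scheduler pods named after this node) and pods running under a system priority class (the cilium and kube-proxy pods here) as critical, and the eviction manager never evicts them. A hypothetical spec showing the field that confers this protection; the pause image name is borrowed from the GC events further down:

    # Illustrative only: a pod made eviction-exempt via its priority class
    apiVersion: v1
    kind: Pod
    metadata:
      name: example              # hypothetical name
      namespace: kube-system     # system priority classes are restricted to kube-system by default
    spec:
      priorityClassName: system-node-critical
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9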
May 14 18:06:28.874163 systemd[1]: Started sshd@9-172.236.122.223:22-147.75.109.163:54590.service - OpenSSH per-connection server daemon (147.75.109.163:54590). May 14 18:06:29.213260 sshd[3951]: Accepted publickey for core from 147.75.109.163 port 54590 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:29.214902 sshd-session[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:29.220263 systemd-logind[1515]: New session 10 of user core. May 14 18:06:29.227310 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:06:29.535483 sshd[3953]: Connection closed by 147.75.109.163 port 54590 May 14 18:06:29.536106 sshd-session[3951]: pam_unix(sshd:session): session closed for user core May 14 18:06:29.541674 systemd[1]: sshd@9-172.236.122.223:22-147.75.109.163:54590.service: Deactivated successfully. May 14 18:06:29.544715 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:06:29.545967 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. May 14 18:06:29.547946 systemd-logind[1515]: Removed session 10. May 14 18:06:30.452894 kubelet[2802]: I0514 18:06:30.452849 2802 eviction_manager.go:366] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 14 18:06:30.452894 kubelet[2802]: I0514 18:06:30.452897 2802 container_gc.go:88] "Attempting to delete unused containers" May 14 18:06:30.455167 kubelet[2802]: I0514 18:06:30.455153 2802 image_gc_manager.go:404] "Attempting to delete unused images" May 14 18:06:30.456383 kubelet[2802]: I0514 18:06:30.456355 2802 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" size=57236178 runtimeHandler="" May 14 18:06:30.456761 containerd[1533]: time="2025-05-14T18:06:30.456724175Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 14 18:06:30.458483 containerd[1533]: time="2025-05-14T18:06:30.458442825Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.12-0\"" May 14 18:06:30.459366 containerd[1533]: time="2025-05-14T18:06:30.459308835Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\"" May 14 18:06:30.460052 containerd[1533]: time="2025-05-14T18:06:30.459865295Z" level=info msg="RemoveImage \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" returns successfully" May 14 18:06:30.460052 containerd[1533]: time="2025-05-14T18:06:30.460007165Z" level=info msg="ImageDelete event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 14 18:06:30.460260 kubelet[2802]: I0514 18:06:30.460237 2802 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" size=18182961 runtimeHandler="" May 14 18:06:30.460567 containerd[1533]: time="2025-05-14T18:06:30.460536805Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 18:06:30.461302 containerd[1533]: time="2025-05-14T18:06:30.461266175Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 18:06:30.461878 containerd[1533]: time="2025-05-14T18:06:30.461828265Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"" May 
14 18:06:30.462291 containerd[1533]: time="2025-05-14T18:06:30.462246665Z" level=info msg="RemoveImage \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" returns successfully" May 14 18:06:30.462434 containerd[1533]: time="2025-05-14T18:06:30.462346285Z" level=info msg="ImageDelete event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 18:06:30.462637 kubelet[2802]: I0514 18:06:30.462553 2802 image_gc_manager.go:460] "Removing image to free bytes" imageID="sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" size=321520 runtimeHandler="" May 14 18:06:30.462862 containerd[1533]: time="2025-05-14T18:06:30.462770175Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 14 18:06:30.463625 containerd[1533]: time="2025-05-14T18:06:30.463589445Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.9\"" May 14 18:06:30.464378 containerd[1533]: time="2025-05-14T18:06:30.464351205Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"" May 14 18:06:30.464773 containerd[1533]: time="2025-05-14T18:06:30.464736765Z" level=info msg="RemoveImage \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" returns successfully" May 14 18:06:30.466158 containerd[1533]: time="2025-05-14T18:06:30.464814755Z" level=info msg="ImageDelete event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 14 18:06:30.475891 kubelet[2802]: I0514 18:06:30.475868 2802 eviction_manager.go:377] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 14 18:06:30.475980 kubelet[2802]: I0514 18:06:30.475951 2802 eviction_manager.go:395] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-599987898-6lcd5","kube-system/cilium-4fzkc","kube-system/kube-proxy-jqlt5","kube-system/kube-controller-manager-172-236-122-223","kube-system/kube-apiserver-172-236-122-223","kube-system/kube-scheduler-172-236-122-223"] May 14 18:06:30.476010 kubelet[2802]: E0514 18:06:30.475999 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-599987898-6lcd5" May 14 18:06:30.476046 kubelet[2802]: E0514 18:06:30.476013 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-4fzkc" May 14 18:06:30.476046 kubelet[2802]: E0514 18:06:30.476022 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jqlt5" May 14 18:06:30.476046 kubelet[2802]: E0514 18:06:30.476030 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-223" May 14 18:06:30.476046 kubelet[2802]: E0514 18:06:30.476038 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-223" May 14 18:06:30.476046 kubelet[2802]: E0514 18:06:30.476046 2802 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-223" May 14 18:06:30.476203 kubelet[2802]: I0514 18:06:30.476055 2802 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" May 14 18:06:34.605576 systemd[1]: Started sshd@10-172.236.122.223:22-147.75.109.163:54598.service - OpenSSH per-connection server daemon (147.75.109.163:54598). 
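The 18:06:30 pass is the first to free real space: before re-checking the signal, the image GC manager has containerd delete the unused etcd 3.5.12-0 (~57 MB), coredns v1.11.1 (~18 MB) and pause 3.9 (~321 KB) images. Image GC is governed by its own KubeletConfiguration thresholds; a sketch with the upstream defaults, the node's actual values being unknown:

    # Image GC sketch -- upstream default percentages, not read from this node
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    imageGCHighThresholdPercent: 85   # start deleting unused images above this disk usage
    imageGCLowThresholdPercent: 80    # keep deleting until usage drops below this

Reclaiming roughly 75 MB is still not enough to clear the eviction threshold, so the pass again ends with "unable to evict any pods from the node".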
May 14 18:06:34.955930 sshd[3965]: Accepted publickey for core from 147.75.109.163 port 54598 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:34.957824 sshd-session[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:34.963201 systemd-logind[1515]: New session 11 of user core. May 14 18:06:34.967262 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:06:35.290118 sshd[3967]: Connection closed by 147.75.109.163 port 54598 May 14 18:06:35.291217 sshd-session[3965]: pam_unix(sshd:session): session closed for user core May 14 18:06:35.295490 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. May 14 18:06:35.296327 systemd[1]: sshd@10-172.236.122.223:22-147.75.109.163:54598.service: Deactivated successfully. May 14 18:06:35.298549 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:06:35.300291 systemd-logind[1515]: Removed session 11. May 14 18:06:35.353938 systemd[1]: Started sshd@11-172.236.122.223:22-147.75.109.163:54600.service - OpenSSH per-connection server daemon (147.75.109.163:54600). May 14 18:06:35.703762 sshd[3980]: Accepted publickey for core from 147.75.109.163 port 54600 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:35.705740 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:35.711176 systemd-logind[1515]: New session 12 of user core. May 14 18:06:35.715258 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 18:06:36.082946 sshd[3982]: Connection closed by 147.75.109.163 port 54600 May 14 18:06:36.083294 sshd-session[3980]: pam_unix(sshd:session): session closed for user core May 14 18:06:36.089061 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. May 14 18:06:36.089801 systemd[1]: sshd@11-172.236.122.223:22-147.75.109.163:54600.service: Deactivated successfully. May 14 18:06:36.092415 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:06:36.094378 systemd-logind[1515]: Removed session 12. May 14 18:06:36.146654 systemd[1]: Started sshd@12-172.236.122.223:22-147.75.109.163:54604.service - OpenSSH per-connection server daemon (147.75.109.163:54604). May 14 18:06:36.487491 sshd[3992]: Accepted publickey for core from 147.75.109.163 port 54604 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:36.487737 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:36.494185 systemd-logind[1515]: New session 13 of user core. May 14 18:06:36.500468 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:06:36.791884 sshd[3994]: Connection closed by 147.75.109.163 port 54604 May 14 18:06:36.792810 sshd-session[3992]: pam_unix(sshd:session): session closed for user core May 14 18:06:36.797848 systemd[1]: sshd@12-172.236.122.223:22-147.75.109.163:54604.service: Deactivated successfully. May 14 18:06:36.800880 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:06:36.801996 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. May 14 18:06:36.804464 systemd-logind[1515]: Removed session 13. May 14 18:06:41.867213 systemd[1]: Started sshd@13-172.236.122.223:22-147.75.109.163:38052.service - OpenSSH per-connection server daemon (147.75.109.163:38052). 
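Service names of the form sshd@13-172.236.122.223:22-147.75.109.163:38052.service show that sshd here is socket-activated, one templated unit instance per connection, with the local and remote address pair encoded into the instance name; the instance is deactivated as soon as the connection closes. A minimal sketch of that pattern (Flatcar ships its own unit files, so this is illustrative, not the host's actual configuration):

    # sshd.socket sketch: Accept=yes spawns one sshd@.service instance per connection
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service sketch: run sshd in inetd mode on the accepted connection
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket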
May 14 18:06:42.226054 sshd[4016]: Accepted publickey for core from 147.75.109.163 port 38052 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:42.228488 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:42.246719 systemd-logind[1515]: New session 14 of user core. May 14 18:06:42.252284 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:06:42.563207 sshd[4018]: Connection closed by 147.75.109.163 port 38052 May 14 18:06:42.564747 sshd-session[4016]: pam_unix(sshd:session): session closed for user core May 14 18:06:42.570459 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. May 14 18:06:42.571036 systemd[1]: sshd@13-172.236.122.223:22-147.75.109.163:38052.service: Deactivated successfully. May 14 18:06:42.574588 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:06:42.577821 systemd-logind[1515]: Removed session 14. May 14 18:06:47.630446 systemd[1]: Started sshd@14-172.236.122.223:22-147.75.109.163:38060.service - OpenSSH per-connection server daemon (147.75.109.163:38060). May 14 18:06:47.996538 sshd[4030]: Accepted publickey for core from 147.75.109.163 port 38060 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:47.998096 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:48.003532 systemd-logind[1515]: New session 15 of user core. May 14 18:06:48.013282 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 18:06:48.334597 sshd[4032]: Connection closed by 147.75.109.163 port 38060 May 14 18:06:48.335601 sshd-session[4030]: pam_unix(sshd:session): session closed for user core May 14 18:06:48.341879 systemd[1]: sshd@14-172.236.122.223:22-147.75.109.163:38060.service: Deactivated successfully. May 14 18:06:48.348824 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:06:48.350038 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. May 14 18:06:48.353257 systemd-logind[1515]: Removed session 15. May 14 18:06:48.396163 systemd[1]: Started sshd@15-172.236.122.223:22-147.75.109.163:42200.service - OpenSSH per-connection server daemon (147.75.109.163:42200). May 14 18:06:48.745499 sshd[4044]: Accepted publickey for core from 147.75.109.163 port 42200 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:48.747812 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:48.754974 systemd-logind[1515]: New session 16 of user core. May 14 18:06:48.762308 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 18:06:48.906644 kubelet[2802]: E0514 18:06:48.906582 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:06:49.084119 sshd[4046]: Connection closed by 147.75.109.163 port 42200 May 14 18:06:49.084502 sshd-session[4044]: pam_unix(sshd:session): session closed for user core May 14 18:06:49.090010 systemd[1]: sshd@15-172.236.122.223:22-147.75.109.163:42200.service: Deactivated successfully. May 14 18:06:49.093084 systemd[1]: session-16.scope: Deactivated successfully. May 14 18:06:49.094535 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. May 14 18:06:49.098110 systemd-logind[1515]: Removed session 16. 
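Every login above follows the same lifecycle: sshd accepts the publickey, pam_unix opens the session for core, systemd-logind registers a numbered session backed by a session-N.scope cgroup, and on disconnect the scope is deactivated and the session removed. While a session is live it can be inspected with loginctl, for example:

    loginctl list-sessions        # enumerate active logind sessions
    loginctl session-status 16    # cgroup, peer and state for session 16 above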
May 14 18:06:49.146217 systemd[1]: Started sshd@16-172.236.122.223:22-147.75.109.163:42208.service - OpenSSH per-connection server daemon (147.75.109.163:42208). May 14 18:06:49.487083 sshd[4056]: Accepted publickey for core from 147.75.109.163 port 42208 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:49.488670 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:49.495040 systemd-logind[1515]: New session 17 of user core. May 14 18:06:49.501304 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 18:06:50.905698 kubelet[2802]: E0514 18:06:50.905589 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:06:51.091772 sshd[4058]: Connection closed by 147.75.109.163 port 42208 May 14 18:06:51.093309 sshd-session[4056]: pam_unix(sshd:session): session closed for user core May 14 18:06:51.096912 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. May 14 18:06:51.097049 systemd[1]: sshd@16-172.236.122.223:22-147.75.109.163:42208.service: Deactivated successfully. May 14 18:06:51.101002 systemd[1]: session-17.scope: Deactivated successfully. May 14 18:06:51.104106 systemd-logind[1515]: Removed session 17. May 14 18:06:51.156085 systemd[1]: Started sshd@17-172.236.122.223:22-147.75.109.163:42222.service - OpenSSH per-connection server daemon (147.75.109.163:42222). May 14 18:06:51.491363 sshd[4076]: Accepted publickey for core from 147.75.109.163 port 42222 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:51.494608 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:51.501268 systemd-logind[1515]: New session 18 of user core. May 14 18:06:51.510472 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 18:06:51.912365 sshd[4078]: Connection closed by 147.75.109.163 port 42222 May 14 18:06:51.913267 sshd-session[4076]: pam_unix(sshd:session): session closed for user core May 14 18:06:51.917609 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. May 14 18:06:51.918689 systemd[1]: sshd@17-172.236.122.223:22-147.75.109.163:42222.service: Deactivated successfully. May 14 18:06:51.921016 systemd[1]: session-18.scope: Deactivated successfully. May 14 18:06:51.922908 systemd-logind[1515]: Removed session 18. May 14 18:06:51.979017 systemd[1]: Started sshd@18-172.236.122.223:22-147.75.109.163:42226.service - OpenSSH per-connection server daemon (147.75.109.163:42226). May 14 18:06:52.308213 sshd[4088]: Accepted publickey for core from 147.75.109.163 port 42226 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:52.309634 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:52.315360 systemd-logind[1515]: New session 19 of user core. May 14 18:06:52.320264 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 18:06:52.602872 sshd[4090]: Connection closed by 147.75.109.163 port 42226 May 14 18:06:52.603409 sshd-session[4088]: pam_unix(sshd:session): session closed for user core May 14 18:06:52.607645 systemd[1]: sshd@18-172.236.122.223:22-147.75.109.163:42226.service: Deactivated successfully. May 14 18:06:52.611016 systemd[1]: session-19.scope: Deactivated successfully. 
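The recurring "Nameserver limits exceeded" errors mean the node's /etc/resolv.conf lists more nameservers than the glibc resolver's limit of three, so the kubelet propagates only the first three (172.232.0.13, 172.232.0.22, 172.232.0.9) into pod resolv.conf files and reports the rest as omitted. A sketch of a resolv.conf that would trigger the warning; the fourth entry is hypothetical, since the omitted server's address never appears in this log:

    nameserver 172.232.0.13
    nameserver 172.232.0.22
    nameserver 172.232.0.9
    nameserver 192.0.2.53    # hypothetical fourth entry; dropped, producing the warning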
May 14 18:06:52.612115 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. May 14 18:06:52.617127 systemd-logind[1515]: Removed session 19. May 14 18:06:55.906396 kubelet[2802]: E0514 18:06:55.905396 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:06:57.672372 systemd[1]: Started sshd@19-172.236.122.223:22-147.75.109.163:42240.service - OpenSSH per-connection server daemon (147.75.109.163:42240). May 14 18:06:58.000337 sshd[4101]: Accepted publickey for core from 147.75.109.163 port 42240 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:06:58.001972 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:06:58.007875 systemd-logind[1515]: New session 20 of user core. May 14 18:06:58.012243 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 18:06:58.292126 sshd[4103]: Connection closed by 147.75.109.163 port 42240 May 14 18:06:58.293212 sshd-session[4101]: pam_unix(sshd:session): session closed for user core May 14 18:06:58.298297 systemd[1]: sshd@19-172.236.122.223:22-147.75.109.163:42240.service: Deactivated successfully. May 14 18:06:58.301096 systemd[1]: session-20.scope: Deactivated successfully. May 14 18:06:58.302351 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. May 14 18:06:58.304272 systemd-logind[1515]: Removed session 20. May 14 18:06:59.906201 kubelet[2802]: E0514 18:06:59.905933 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:07:03.356162 systemd[1]: Started sshd@20-172.236.122.223:22-147.75.109.163:57888.service - OpenSSH per-connection server daemon (147.75.109.163:57888). May 14 18:07:03.689838 sshd[4118]: Accepted publickey for core from 147.75.109.163 port 57888 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:03.691293 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:03.696185 systemd-logind[1515]: New session 21 of user core. May 14 18:07:03.703314 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 18:07:04.019390 sshd[4120]: Connection closed by 147.75.109.163 port 57888 May 14 18:07:04.020208 sshd-session[4118]: pam_unix(sshd:session): session closed for user core May 14 18:07:04.024323 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit. May 14 18:07:04.025028 systemd[1]: sshd@20-172.236.122.223:22-147.75.109.163:57888.service: Deactivated successfully. May 14 18:07:04.027661 systemd[1]: session-21.scope: Deactivated successfully. May 14 18:07:04.029843 systemd-logind[1515]: Removed session 21. May 14 18:07:09.086257 systemd[1]: Started sshd@21-172.236.122.223:22-147.75.109.163:42962.service - OpenSSH per-connection server daemon (147.75.109.163:42962). May 14 18:07:09.433175 sshd[4134]: Accepted publickey for core from 147.75.109.163 port 42962 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:09.434878 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:09.440160 systemd-logind[1515]: New session 22 of user core. May 14 18:07:09.444270 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 14 18:07:09.741724 sshd[4136]: Connection closed by 147.75.109.163 port 42962 May 14 18:07:09.742775 sshd-session[4134]: pam_unix(sshd:session): session closed for user core May 14 18:07:09.749525 systemd[1]: sshd@21-172.236.122.223:22-147.75.109.163:42962.service: Deactivated successfully. May 14 18:07:09.752946 systemd[1]: session-22.scope: Deactivated successfully. May 14 18:07:09.754642 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit. May 14 18:07:09.756810 systemd-logind[1515]: Removed session 22. May 14 18:07:10.906349 kubelet[2802]: E0514 18:07:10.906258 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:07:14.817558 systemd[1]: Started sshd@22-172.236.122.223:22-147.75.109.163:42976.service - OpenSSH per-connection server daemon (147.75.109.163:42976). May 14 18:07:15.160035 sshd[4148]: Accepted publickey for core from 147.75.109.163 port 42976 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:15.161961 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:15.168609 systemd-logind[1515]: New session 23 of user core. May 14 18:07:15.173438 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 18:07:15.460587 sshd[4150]: Connection closed by 147.75.109.163 port 42976 May 14 18:07:15.461508 sshd-session[4148]: pam_unix(sshd:session): session closed for user core May 14 18:07:15.466489 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit. May 14 18:07:15.467367 systemd[1]: sshd@22-172.236.122.223:22-147.75.109.163:42976.service: Deactivated successfully. May 14 18:07:15.469773 systemd[1]: session-23.scope: Deactivated successfully. May 14 18:07:15.471815 systemd-logind[1515]: Removed session 23. May 14 18:07:20.525776 systemd[1]: Started sshd@23-172.236.122.223:22-147.75.109.163:38962.service - OpenSSH per-connection server daemon (147.75.109.163:38962). May 14 18:07:20.858725 sshd[4172]: Accepted publickey for core from 147.75.109.163 port 38962 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:20.860178 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:20.865196 systemd-logind[1515]: New session 24 of user core. May 14 18:07:20.870252 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 18:07:21.164774 sshd[4174]: Connection closed by 147.75.109.163 port 38962 May 14 18:07:21.165016 sshd-session[4172]: pam_unix(sshd:session): session closed for user core May 14 18:07:21.170483 systemd[1]: sshd@23-172.236.122.223:22-147.75.109.163:38962.service: Deactivated successfully. May 14 18:07:21.173661 systemd[1]: session-24.scope: Deactivated successfully. May 14 18:07:21.174595 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit. May 14 18:07:21.176350 systemd-logind[1515]: Removed session 24. May 14 18:07:23.906376 kubelet[2802]: E0514 18:07:23.906326 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:07:26.232269 systemd[1]: Started sshd@24-172.236.122.223:22-147.75.109.163:38974.service - OpenSSH per-connection server daemon (147.75.109.163:38974). 
May 14 18:07:26.582202 sshd[4186]: Accepted publickey for core from 147.75.109.163 port 38974 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:26.584057 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:26.590568 systemd-logind[1515]: New session 25 of user core. May 14 18:07:26.596272 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 18:07:26.892851 sshd[4188]: Connection closed by 147.75.109.163 port 38974 May 14 18:07:26.893708 sshd-session[4186]: pam_unix(sshd:session): session closed for user core May 14 18:07:26.899052 systemd[1]: sshd@24-172.236.122.223:22-147.75.109.163:38974.service: Deactivated successfully. May 14 18:07:26.902010 systemd[1]: session-25.scope: Deactivated successfully. May 14 18:07:26.906288 systemd-logind[1515]: Session 25 logged out. Waiting for processes to exit. May 14 18:07:26.907777 systemd-logind[1515]: Removed session 25. May 14 18:07:31.962413 systemd[1]: Started sshd@25-172.236.122.223:22-147.75.109.163:47838.service - OpenSSH per-connection server daemon (147.75.109.163:47838). May 14 18:07:32.298028 sshd[4200]: Accepted publickey for core from 147.75.109.163 port 47838 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:32.300238 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:32.307584 systemd-logind[1515]: New session 26 of user core. May 14 18:07:32.312280 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 18:07:32.629720 sshd[4202]: Connection closed by 147.75.109.163 port 47838 May 14 18:07:32.630510 sshd-session[4200]: pam_unix(sshd:session): session closed for user core May 14 18:07:32.635011 systemd-logind[1515]: Session 26 logged out. Waiting for processes to exit. May 14 18:07:32.636040 systemd[1]: sshd@25-172.236.122.223:22-147.75.109.163:47838.service: Deactivated successfully. May 14 18:07:32.638456 systemd[1]: session-26.scope: Deactivated successfully. May 14 18:07:32.640871 systemd-logind[1515]: Removed session 26. May 14 18:07:37.696288 systemd[1]: Started sshd@26-172.236.122.223:22-147.75.109.163:47842.service - OpenSSH per-connection server daemon (147.75.109.163:47842). May 14 18:07:38.048024 sshd[4216]: Accepted publickey for core from 147.75.109.163 port 47842 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:38.050122 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:38.056953 systemd-logind[1515]: New session 27 of user core. May 14 18:07:38.064291 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 18:07:38.392617 sshd[4218]: Connection closed by 147.75.109.163 port 47842 May 14 18:07:38.393504 sshd-session[4216]: pam_unix(sshd:session): session closed for user core May 14 18:07:38.398693 systemd[1]: sshd@26-172.236.122.223:22-147.75.109.163:47842.service: Deactivated successfully. May 14 18:07:38.402002 systemd[1]: session-27.scope: Deactivated successfully. May 14 18:07:38.403596 systemd-logind[1515]: Session 27 logged out. Waiting for processes to exit. May 14 18:07:38.405520 systemd-logind[1515]: Removed session 27. May 14 18:07:43.462879 systemd[1]: Started sshd@27-172.236.122.223:22-147.75.109.163:51592.service - OpenSSH per-connection server daemon (147.75.109.163:51592). 
May 14 18:07:43.812304 sshd[4230]: Accepted publickey for core from 147.75.109.163 port 51592 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:43.813879 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:43.819386 systemd-logind[1515]: New session 28 of user core. May 14 18:07:43.824275 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 18:07:44.171760 sshd[4232]: Connection closed by 147.75.109.163 port 51592 May 14 18:07:44.172956 sshd-session[4230]: pam_unix(sshd:session): session closed for user core May 14 18:07:44.179485 systemd[1]: sshd@27-172.236.122.223:22-147.75.109.163:51592.service: Deactivated successfully. May 14 18:07:44.183183 systemd[1]: session-28.scope: Deactivated successfully. May 14 18:07:44.184337 systemd-logind[1515]: Session 28 logged out. Waiting for processes to exit. May 14 18:07:44.186523 systemd-logind[1515]: Removed session 28. May 14 18:07:49.243848 systemd[1]: Started sshd@28-172.236.122.223:22-147.75.109.163:34170.service - OpenSSH per-connection server daemon (147.75.109.163:34170). May 14 18:07:49.574240 sshd[4244]: Accepted publickey for core from 147.75.109.163 port 34170 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:49.576170 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:49.581888 systemd-logind[1515]: New session 29 of user core. May 14 18:07:49.587265 systemd[1]: Started session-29.scope - Session 29 of User core. May 14 18:07:49.901289 sshd[4247]: Connection closed by 147.75.109.163 port 34170 May 14 18:07:49.902982 sshd-session[4244]: pam_unix(sshd:session): session closed for user core May 14 18:07:49.912030 systemd[1]: sshd@28-172.236.122.223:22-147.75.109.163:34170.service: Deactivated successfully. May 14 18:07:49.914443 systemd[1]: session-29.scope: Deactivated successfully. May 14 18:07:49.916115 systemd-logind[1515]: Session 29 logged out. Waiting for processes to exit. May 14 18:07:49.918010 systemd-logind[1515]: Removed session 29. May 14 18:07:54.966253 systemd[1]: Started sshd@29-172.236.122.223:22-147.75.109.163:34182.service - OpenSSH per-connection server daemon (147.75.109.163:34182). May 14 18:07:55.296180 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 34182 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:07:55.298231 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:07:55.306876 systemd-logind[1515]: New session 30 of user core. May 14 18:07:55.313289 systemd[1]: Started session-30.scope - Session 30 of User core. May 14 18:07:55.614483 sshd[4262]: Connection closed by 147.75.109.163 port 34182 May 14 18:07:55.615220 sshd-session[4259]: pam_unix(sshd:session): session closed for user core May 14 18:07:55.620171 systemd-logind[1515]: Session 30 logged out. Waiting for processes to exit. May 14 18:07:55.621018 systemd[1]: sshd@29-172.236.122.223:22-147.75.109.163:34182.service: Deactivated successfully. May 14 18:07:55.623557 systemd[1]: session-30.scope: Deactivated successfully. May 14 18:07:55.626310 systemd-logind[1515]: Removed session 30. May 14 18:08:00.678345 systemd[1]: Started sshd@30-172.236.122.223:22-147.75.109.163:56818.service - OpenSSH per-connection server daemon (147.75.109.163:56818). 
May 14 18:08:01.016295 sshd[4274]: Accepted publickey for core from 147.75.109.163 port 56818 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:08:01.018165 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:01.024195 systemd-logind[1515]: New session 31 of user core. May 14 18:08:01.030280 systemd[1]: Started session-31.scope - Session 31 of User core. May 14 18:08:01.326907 sshd[4276]: Connection closed by 147.75.109.163 port 56818 May 14 18:08:01.328432 sshd-session[4274]: pam_unix(sshd:session): session closed for user core May 14 18:08:01.333432 systemd[1]: sshd@30-172.236.122.223:22-147.75.109.163:56818.service: Deactivated successfully. May 14 18:08:01.336260 systemd[1]: session-31.scope: Deactivated successfully. May 14 18:08:01.338516 systemd-logind[1515]: Session 31 logged out. Waiting for processes to exit. May 14 18:08:01.340482 systemd-logind[1515]: Removed session 31. May 14 18:08:05.906887 kubelet[2802]: E0514 18:08:05.906568 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:08:06.393918 systemd[1]: Started sshd@31-172.236.122.223:22-147.75.109.163:56822.service - OpenSSH per-connection server daemon (147.75.109.163:56822). May 14 18:08:06.737256 sshd[4287]: Accepted publickey for core from 147.75.109.163 port 56822 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:08:06.739060 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:06.745763 systemd-logind[1515]: New session 32 of user core. May 14 18:08:06.750448 systemd[1]: Started session-32.scope - Session 32 of User core. May 14 18:08:07.070219 sshd[4290]: Connection closed by 147.75.109.163 port 56822 May 14 18:08:07.070967 sshd-session[4287]: pam_unix(sshd:session): session closed for user core May 14 18:08:07.075903 systemd-logind[1515]: Session 32 logged out. Waiting for processes to exit. May 14 18:08:07.076805 systemd[1]: sshd@31-172.236.122.223:22-147.75.109.163:56822.service: Deactivated successfully. May 14 18:08:07.082085 systemd[1]: session-32.scope: Deactivated successfully. May 14 18:08:07.084650 systemd-logind[1515]: Removed session 32. May 14 18:08:08.905951 kubelet[2802]: E0514 18:08:08.905878 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:08:10.906106 kubelet[2802]: E0514 18:08:10.906021 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:08:12.143292 systemd[1]: Started sshd@32-172.236.122.223:22-147.75.109.163:34024.service - OpenSSH per-connection server daemon (147.75.109.163:34024). May 14 18:08:12.484083 sshd[4303]: Accepted publickey for core from 147.75.109.163 port 34024 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:08:12.486042 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:12.492366 systemd-logind[1515]: New session 33 of user core. May 14 18:08:12.497283 systemd[1]: Started session-33.scope - Session 33 of User core. 
May 14 18:08:12.844851 sshd[4305]: Connection closed by 147.75.109.163 port 34024 May 14 18:08:12.845805 sshd-session[4303]: pam_unix(sshd:session): session closed for user core May 14 18:08:12.851010 systemd-logind[1515]: Session 33 logged out. Waiting for processes to exit. May 14 18:08:12.851909 systemd[1]: sshd@32-172.236.122.223:22-147.75.109.163:34024.service: Deactivated successfully. May 14 18:08:12.856161 systemd[1]: session-33.scope: Deactivated successfully. May 14 18:08:12.858256 systemd-logind[1515]: Removed session 33. May 14 18:08:14.906490 kubelet[2802]: E0514 18:08:14.906276 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:08:17.909470 systemd[1]: Started sshd@33-172.236.122.223:22-147.75.109.163:34034.service - OpenSSH per-connection server daemon (147.75.109.163:34034). May 14 18:08:18.244446 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 34034 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:08:18.246331 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:18.252595 systemd-logind[1515]: New session 34 of user core. May 14 18:08:18.260273 systemd[1]: Started session-34.scope - Session 34 of User core. May 14 18:08:18.558386 sshd[4319]: Connection closed by 147.75.109.163 port 34034 May 14 18:08:18.559163 sshd-session[4317]: pam_unix(sshd:session): session closed for user core May 14 18:08:18.565901 systemd[1]: sshd@33-172.236.122.223:22-147.75.109.163:34034.service: Deactivated successfully. May 14 18:08:18.569115 systemd[1]: session-34.scope: Deactivated successfully. May 14 18:08:18.571773 systemd-logind[1515]: Session 34 logged out. Waiting for processes to exit. May 14 18:08:18.574112 systemd-logind[1515]: Removed session 34. May 14 18:08:23.634480 systemd[1]: Started sshd@34-172.236.122.223:22-147.75.109.163:56790.service - OpenSSH per-connection server daemon (147.75.109.163:56790). May 14 18:08:23.979808 sshd[4332]: Accepted publickey for core from 147.75.109.163 port 56790 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:08:23.981401 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:23.987002 systemd-logind[1515]: New session 35 of user core. May 14 18:08:23.992281 systemd[1]: Started session-35.scope - Session 35 of User core. May 14 18:08:24.289740 sshd[4334]: Connection closed by 147.75.109.163 port 56790 May 14 18:08:24.290648 sshd-session[4332]: pam_unix(sshd:session): session closed for user core May 14 18:08:24.295022 systemd[1]: sshd@34-172.236.122.223:22-147.75.109.163:56790.service: Deactivated successfully. May 14 18:08:24.297110 systemd[1]: session-35.scope: Deactivated successfully. May 14 18:08:24.298169 systemd-logind[1515]: Session 35 logged out. Waiting for processes to exit. May 14 18:08:24.299764 systemd-logind[1515]: Removed session 35. May 14 18:08:29.026788 systemd[1]: Started sshd@35-172.236.122.223:22-172.236.228.220:36606.service - OpenSSH per-connection server daemon (172.236.228.220:36606). May 14 18:08:29.353476 systemd[1]: Started sshd@36-172.236.122.223:22-147.75.109.163:55572.service - OpenSSH per-connection server daemon (147.75.109.163:55572). 
May 14 18:08:29.695104 sshd[4347]: Accepted publickey for core from 147.75.109.163 port 55572 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:08:29.696685 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:29.701607 systemd-logind[1515]: New session 36 of user core. May 14 18:08:29.705279 systemd[1]: Started session-36.scope - Session 36 of User core. May 14 18:08:30.010160 sshd[4350]: Connection closed by 147.75.109.163 port 55572 May 14 18:08:30.011421 sshd-session[4347]: pam_unix(sshd:session): session closed for user core May 14 18:08:30.017940 systemd[1]: sshd@36-172.236.122.223:22-147.75.109.163:55572.service: Deactivated successfully. May 14 18:08:30.018057 systemd-logind[1515]: Session 36 logged out. Waiting for processes to exit. May 14 18:08:30.020851 systemd[1]: session-36.scope: Deactivated successfully. May 14 18:08:30.022991 systemd-logind[1515]: Removed session 36. May 14 18:08:30.478755 sshd[4345]: Connection closed by 172.236.228.220 port 36606 [preauth] May 14 18:08:30.480938 systemd[1]: sshd@35-172.236.122.223:22-172.236.228.220:36606.service: Deactivated successfully. May 14 18:08:30.541936 systemd[1]: Started sshd@37-172.236.122.223:22-172.236.228.220:36618.service - OpenSSH per-connection server daemon (172.236.228.220:36618). May 14 18:08:31.899144 sshd[4363]: Connection closed by 172.236.228.220 port 36618 [preauth] May 14 18:08:31.902151 systemd[1]: sshd@37-172.236.122.223:22-172.236.228.220:36618.service: Deactivated successfully. May 14 18:08:31.991333 systemd[1]: Started sshd@38-172.236.122.223:22-172.236.228.220:36626.service - OpenSSH per-connection server daemon (172.236.228.220:36626). May 14 18:08:32.906259 kubelet[2802]: E0514 18:08:32.906188 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" May 14 18:08:33.414333 sshd[4368]: Connection closed by 172.236.228.220 port 36626 [preauth] May 14 18:08:33.416068 systemd[1]: sshd@38-172.236.122.223:22-172.236.228.220:36626.service: Deactivated successfully. May 14 18:08:35.085385 systemd[1]: Started sshd@39-172.236.122.223:22-147.75.109.163:55588.service - OpenSSH per-connection server daemon (147.75.109.163:55588). May 14 18:08:35.410932 sshd[4373]: Accepted publickey for core from 147.75.109.163 port 55588 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc May 14 18:08:35.412439 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:08:35.418437 systemd-logind[1515]: New session 37 of user core. May 14 18:08:35.423259 systemd[1]: Started session-37.scope - Session 37 of User core. May 14 18:08:35.733610 sshd[4375]: Connection closed by 147.75.109.163 port 55588 May 14 18:08:35.735388 sshd-session[4373]: pam_unix(sshd:session): session closed for user core May 14 18:08:35.740008 systemd-logind[1515]: Session 37 logged out. Waiting for processes to exit. May 14 18:08:35.741655 systemd[1]: sshd@39-172.236.122.223:22-147.75.109.163:55588.service: Deactivated successfully. May 14 18:08:35.744666 systemd[1]: session-37.scope: Deactivated successfully. May 14 18:08:35.746849 systemd-logind[1515]: Removed session 37. 
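The two connections from 172.236.228.220 close "[preauth]", i.e. before authentication ever starts; this connect-and-drop pattern is typical of scanners, and each attempt merely spins up and tears down one per-connection unit. sshd_config can bound the cost of such unauthenticated connections; illustrative values, not the host's configuration:

    # sshd_config sketch (illustrative values)
    LoginGraceTime 30        # drop clients that have not authenticated within 30 seconds
    MaxStartups 10:30:100    # probabilistically refuse unauthenticated connections beyond 10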
May 14 18:08:39.911177 kubelet[2802]: E0514 18:08:39.907926 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:08:40.795959 systemd[1]: Started sshd@40-172.236.122.223:22-147.75.109.163:55804.service - OpenSSH per-connection server daemon (147.75.109.163:55804).
May 14 18:08:41.131399 sshd[4388]: Accepted publickey for core from 147.75.109.163 port 55804 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:08:41.133223 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:08:41.138598 systemd-logind[1515]: New session 38 of user core.
May 14 18:08:41.150361 systemd[1]: Started session-38.scope - Session 38 of User core.
May 14 18:08:41.449913 sshd[4390]: Connection closed by 147.75.109.163 port 55804
May 14 18:08:41.450983 sshd-session[4388]: pam_unix(sshd:session): session closed for user core
May 14 18:08:41.456271 systemd[1]: sshd@40-172.236.122.223:22-147.75.109.163:55804.service: Deactivated successfully.
May 14 18:08:41.459364 systemd[1]: session-38.scope: Deactivated successfully.
May 14 18:08:41.460293 systemd-logind[1515]: Session 38 logged out. Waiting for processes to exit.
May 14 18:08:41.462986 systemd-logind[1515]: Removed session 38.
May 14 18:08:46.514824 systemd[1]: Started sshd@41-172.236.122.223:22-147.75.109.163:55812.service - OpenSSH per-connection server daemon (147.75.109.163:55812).
May 14 18:08:46.863547 sshd[4403]: Accepted publickey for core from 147.75.109.163 port 55812 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:08:46.865395 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:08:46.870085 systemd-logind[1515]: New session 39 of user core.
May 14 18:08:46.878278 systemd[1]: Started session-39.scope - Session 39 of User core.
May 14 18:08:47.180987 sshd[4405]: Connection closed by 147.75.109.163 port 55812
May 14 18:08:47.182389 sshd-session[4403]: pam_unix(sshd:session): session closed for user core
May 14 18:08:47.187589 systemd[1]: sshd@41-172.236.122.223:22-147.75.109.163:55812.service: Deactivated successfully.
May 14 18:08:47.190641 systemd[1]: session-39.scope: Deactivated successfully.
May 14 18:08:47.191882 systemd-logind[1515]: Session 39 logged out. Waiting for processes to exit.
May 14 18:08:47.193768 systemd-logind[1515]: Removed session 39.
May 14 18:08:52.243996 systemd[1]: Started sshd@42-172.236.122.223:22-147.75.109.163:60492.service - OpenSSH per-connection server daemon (147.75.109.163:60492).
May 14 18:08:52.588928 sshd[4417]: Accepted publickey for core from 147.75.109.163 port 60492 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:08:52.590604 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:08:52.595938 systemd-logind[1515]: New session 40 of user core.
May 14 18:08:52.601263 systemd[1]: Started session-40.scope - Session 40 of User core.
May 14 18:08:52.890554 sshd[4419]: Connection closed by 147.75.109.163 port 60492
May 14 18:08:52.892380 sshd-session[4417]: pam_unix(sshd:session): session closed for user core
May 14 18:08:52.898310 systemd[1]: sshd@42-172.236.122.223:22-147.75.109.163:60492.service: Deactivated successfully.
May 14 18:08:52.900935 systemd[1]: session-40.scope: Deactivated successfully.
May 14 18:08:52.902223 systemd-logind[1515]: Session 40 logged out. Waiting for processes to exit.
May 14 18:08:52.904052 systemd-logind[1515]: Removed session 40.
May 14 18:08:57.955290 systemd[1]: Started sshd@43-172.236.122.223:22-147.75.109.163:60504.service - OpenSSH per-connection server daemon (147.75.109.163:60504).
May 14 18:08:58.298177 sshd[4431]: Accepted publickey for core from 147.75.109.163 port 60504 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:08:58.300042 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:08:58.305461 systemd-logind[1515]: New session 41 of user core.
May 14 18:08:58.311262 systemd[1]: Started session-41.scope - Session 41 of User core.
May 14 18:08:58.606241 sshd[4433]: Connection closed by 147.75.109.163 port 60504
May 14 18:08:58.607373 sshd-session[4431]: pam_unix(sshd:session): session closed for user core
May 14 18:08:58.611955 systemd[1]: sshd@43-172.236.122.223:22-147.75.109.163:60504.service: Deactivated successfully.
May 14 18:08:58.614008 systemd[1]: session-41.scope: Deactivated successfully.
May 14 18:08:58.614937 systemd-logind[1515]: Session 41 logged out. Waiting for processes to exit.
May 14 18:08:58.616705 systemd-logind[1515]: Removed session 41.
May 14 18:09:03.676742 systemd[1]: Started sshd@44-172.236.122.223:22-147.75.109.163:59428.service - OpenSSH per-connection server daemon (147.75.109.163:59428).
May 14 18:09:04.032235 sshd[4444]: Accepted publickey for core from 147.75.109.163 port 59428 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:04.034180 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:04.040742 systemd-logind[1515]: New session 42 of user core.
May 14 18:09:04.050297 systemd[1]: Started session-42.scope - Session 42 of User core.
May 14 18:09:04.342337 sshd[4446]: Connection closed by 147.75.109.163 port 59428
May 14 18:09:04.343450 sshd-session[4444]: pam_unix(sshd:session): session closed for user core
May 14 18:09:04.349122 systemd[1]: sshd@44-172.236.122.223:22-147.75.109.163:59428.service: Deactivated successfully.
May 14 18:09:04.352063 systemd[1]: session-42.scope: Deactivated successfully.
May 14 18:09:04.353978 systemd-logind[1515]: Session 42 logged out. Waiting for processes to exit.
May 14 18:09:04.355594 systemd-logind[1515]: Removed session 42.
May 14 18:09:09.413286 systemd[1]: Started sshd@45-172.236.122.223:22-147.75.109.163:40926.service - OpenSSH per-connection server daemon (147.75.109.163:40926).
May 14 18:09:09.740822 sshd[4460]: Accepted publickey for core from 147.75.109.163 port 40926 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:09.743107 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:09.748095 systemd-logind[1515]: New session 43 of user core.
May 14 18:09:09.755365 systemd[1]: Started session-43.scope - Session 43 of User core.
May 14 18:09:10.044408 sshd[4462]: Connection closed by 147.75.109.163 port 40926
May 14 18:09:10.046363 sshd-session[4460]: pam_unix(sshd:session): session closed for user core
May 14 18:09:10.051347 systemd-logind[1515]: Session 43 logged out. Waiting for processes to exit.
May 14 18:09:10.052196 systemd[1]: sshd@45-172.236.122.223:22-147.75.109.163:40926.service: Deactivated successfully.
May 14 18:09:10.054279 systemd[1]: session-43.scope: Deactivated successfully.
May 14 18:09:10.056207 systemd-logind[1515]: Removed session 43.
May 14 18:09:10.905563 kubelet[2802]: E0514 18:09:10.905509 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:09:14.598665 containerd[1533]: time="2025-05-14T18:09:14.598516011Z" level=warning msg="container event discarded" container=0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d type=CONTAINER_CREATED_EVENT
May 14 18:09:14.609849 containerd[1533]: time="2025-05-14T18:09:14.609788032Z" level=warning msg="container event discarded" container=0d5b067183874fa441a105b1bd4ba2d2699ec6b6df4c8c99702262023b8c012d type=CONTAINER_STARTED_EVENT
May 14 18:09:14.627127 containerd[1533]: time="2025-05-14T18:09:14.627064310Z" level=warning msg="container event discarded" container=4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f type=CONTAINER_CREATED_EVENT
May 14 18:09:14.627127 containerd[1533]: time="2025-05-14T18:09:14.627121730Z" level=warning msg="container event discarded" container=4bc8ca91417b028bd626682f0b0e3f9343ba1ca9c8be769ba1f3acf25fa20d5f type=CONTAINER_STARTED_EVENT
May 14 18:09:14.627307 containerd[1533]: time="2025-05-14T18:09:14.627130840Z" level=warning msg="container event discarded" container=4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658 type=CONTAINER_CREATED_EVENT
May 14 18:09:14.627307 containerd[1533]: time="2025-05-14T18:09:14.627155260Z" level=warning msg="container event discarded" container=4f2e824a8b51bbff814c3da5224754a8d26fa9a954065d942bee4a5154e0f658 type=CONTAINER_STARTED_EVENT
May 14 18:09:14.642468 containerd[1533]: time="2025-05-14T18:09:14.642434160Z" level=warning msg="container event discarded" container=8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756 type=CONTAINER_CREATED_EVENT
May 14 18:09:14.642468 containerd[1533]: time="2025-05-14T18:09:14.642462759Z" level=warning msg="container event discarded" container=ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456 type=CONTAINER_CREATED_EVENT
May 14 18:09:14.657759 containerd[1533]: time="2025-05-14T18:09:14.657672389Z" level=warning msg="container event discarded" container=fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8 type=CONTAINER_CREATED_EVENT
May 14 18:09:14.773338 containerd[1533]: time="2025-05-14T18:09:14.773261321Z" level=warning msg="container event discarded" container=ffbced1cf00b3ec19ebb4bbc042a7b92a9626d9627431fa50bca15228efba456 type=CONTAINER_STARTED_EVENT
May 14 18:09:14.787472 containerd[1533]: time="2025-05-14T18:09:14.787437796Z" level=warning msg="container event discarded" container=8d04619c052d6293d37b47b7f1d784cfc10d2ed42c8cb09048963f4157415756 type=CONTAINER_STARTED_EVENT
May 14 18:09:14.831870 containerd[1533]: time="2025-05-14T18:09:14.831794422Z" level=warning msg="container event discarded" container=fec6fcbadb466210bc475defb9b9a445e6741349b574d8f723db72e08b6b6fd8 type=CONTAINER_STARTED_EVENT
May 14 18:09:15.111338 systemd[1]: Started sshd@46-172.236.122.223:22-147.75.109.163:40938.service - OpenSSH per-connection server daemon (147.75.109.163:40938).
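
[Note: the "container event discarded" warnings above indicate containerd dropped the listed CONTAINER_* lifecycle events, apparently because nothing was draining the event stream when they were emitted. A sketch (not from this system) of consuming that stream with the containerd Go client; the socket path and the k8s.io namespace match the conventional setup on a node like this one, but both are assumptions:]

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Default containerd socket path; an assumption, not read from this log.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed containers live in the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Subscribe returns an envelope channel and an error channel;
    	// a prompt consumer sees the events this journal reports as discarded.
    	envelopes, errs := client.EventService().Subscribe(ctx)
    	for {
    		select {
    		case env := <-envelopes:
    			fmt.Printf("%s %s\n", env.Timestamp, env.Topic)
    		case err := <-errs:
    			log.Fatal(err)
    		}
    	}
    }
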
May 14 18:09:15.459295 sshd[4474]: Accepted publickey for core from 147.75.109.163 port 40938 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:15.460509 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:15.466248 systemd-logind[1515]: New session 44 of user core.
May 14 18:09:15.471296 systemd[1]: Started session-44.scope - Session 44 of User core.
May 14 18:09:15.771893 sshd[4476]: Connection closed by 147.75.109.163 port 40938
May 14 18:09:15.773392 sshd-session[4474]: pam_unix(sshd:session): session closed for user core
May 14 18:09:15.778682 systemd-logind[1515]: Session 44 logged out. Waiting for processes to exit.
May 14 18:09:15.779280 systemd[1]: sshd@46-172.236.122.223:22-147.75.109.163:40938.service: Deactivated successfully.
May 14 18:09:15.781965 systemd[1]: session-44.scope: Deactivated successfully.
May 14 18:09:15.785098 systemd-logind[1515]: Removed session 44.
May 14 18:09:17.906892 kubelet[2802]: E0514 18:09:17.906208 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:09:20.838528 systemd[1]: Started sshd@47-172.236.122.223:22-147.75.109.163:41568.service - OpenSSH per-connection server daemon (147.75.109.163:41568).
May 14 18:09:21.175903 sshd[4490]: Accepted publickey for core from 147.75.109.163 port 41568 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:21.178224 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:21.184776 systemd-logind[1515]: New session 45 of user core.
May 14 18:09:21.190307 systemd[1]: Started session-45.scope - Session 45 of User core.
May 14 18:09:21.480773 sshd[4492]: Connection closed by 147.75.109.163 port 41568
May 14 18:09:21.481635 sshd-session[4490]: pam_unix(sshd:session): session closed for user core
May 14 18:09:21.488975 systemd-logind[1515]: Session 45 logged out. Waiting for processes to exit.
May 14 18:09:21.489929 systemd[1]: sshd@47-172.236.122.223:22-147.75.109.163:41568.service: Deactivated successfully.
May 14 18:09:21.493021 systemd[1]: session-45.scope: Deactivated successfully.
May 14 18:09:21.496604 systemd-logind[1515]: Removed session 45.
May 14 18:09:25.905715 kubelet[2802]: E0514 18:09:25.905247 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:09:26.547317 systemd[1]: Started sshd@48-172.236.122.223:22-147.75.109.163:41580.service - OpenSSH per-connection server daemon (147.75.109.163:41580).
May 14 18:09:26.899781 sshd[4506]: Accepted publickey for core from 147.75.109.163 port 41580 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:26.901809 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:26.907122 systemd-logind[1515]: New session 46 of user core.
May 14 18:09:26.914285 systemd[1]: Started session-46.scope - Session 46 of User core.
May 14 18:09:27.237108 sshd[4508]: Connection closed by 147.75.109.163 port 41580
May 14 18:09:27.238544 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
May 14 18:09:27.244713 systemd[1]: sshd@48-172.236.122.223:22-147.75.109.163:41580.service: Deactivated successfully.
May 14 18:09:27.247642 systemd[1]: session-46.scope: Deactivated successfully.
May 14 18:09:27.249926 systemd-logind[1515]: Session 46 logged out. Waiting for processes to exit.
May 14 18:09:27.251408 systemd-logind[1515]: Removed session 46.
May 14 18:09:32.304440 systemd[1]: Started sshd@49-172.236.122.223:22-147.75.109.163:42136.service - OpenSSH per-connection server daemon (147.75.109.163:42136).
May 14 18:09:32.657851 sshd[4520]: Accepted publickey for core from 147.75.109.163 port 42136 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:32.660333 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:32.667177 systemd-logind[1515]: New session 47 of user core.
May 14 18:09:32.673308 systemd[1]: Started session-47.scope - Session 47 of User core.
May 14 18:09:32.996745 sshd[4522]: Connection closed by 147.75.109.163 port 42136
May 14 18:09:32.997774 sshd-session[4520]: pam_unix(sshd:session): session closed for user core
May 14 18:09:33.003289 systemd-logind[1515]: Session 47 logged out. Waiting for processes to exit.
May 14 18:09:33.003587 systemd[1]: sshd@49-172.236.122.223:22-147.75.109.163:42136.service: Deactivated successfully.
May 14 18:09:33.006359 systemd[1]: session-47.scope: Deactivated successfully.
May 14 18:09:33.008800 systemd-logind[1515]: Removed session 47.
May 14 18:09:35.906790 kubelet[2802]: E0514 18:09:35.906531 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:09:36.482299 containerd[1533]: time="2025-05-14T18:09:36.482157586Z" level=warning msg="container event discarded" container=eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7 type=CONTAINER_CREATED_EVENT
May 14 18:09:36.482299 containerd[1533]: time="2025-05-14T18:09:36.482258926Z" level=warning msg="container event discarded" container=eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7 type=CONTAINER_STARTED_EVENT
May 14 18:09:36.724643 containerd[1533]: time="2025-05-14T18:09:36.724564760Z" level=warning msg="container event discarded" container=3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092 type=CONTAINER_CREATED_EVENT
May 14 18:09:36.724643 containerd[1533]: time="2025-05-14T18:09:36.724631010Z" level=warning msg="container event discarded" container=3063a4e48a45abe94fde5811c049b5fc7d7bf4e0fafd026aa322413858ff0092 type=CONTAINER_STARTED_EVENT
May 14 18:09:36.724643 containerd[1533]: time="2025-05-14T18:09:36.724640900Z" level=warning msg="container event discarded" container=24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435 type=CONTAINER_CREATED_EVENT
May 14 18:09:36.724643 containerd[1533]: time="2025-05-14T18:09:36.724647940Z" level=warning msg="container event discarded" container=24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435 type=CONTAINER_STARTED_EVENT
May 14 18:09:36.754035 containerd[1533]: time="2025-05-14T18:09:36.753869424Z" level=warning msg="container event discarded" container=02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce type=CONTAINER_CREATED_EVENT
May 14 18:09:36.825384 containerd[1533]: time="2025-05-14T18:09:36.825316316Z" level=warning msg="container event discarded" container=02089c1eb97d5c61d3b7c714b657d5538a1e739eb72dafb1ea5d6b1ce90753ce type=CONTAINER_STARTED_EVENT
May 14 18:09:38.059641 systemd[1]: Started sshd@50-172.236.122.223:22-147.75.109.163:42150.service - OpenSSH per-connection server daemon (147.75.109.163:42150).
May 14 18:09:38.404168 sshd[4538]: Accepted publickey for core from 147.75.109.163 port 42150 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:38.405543 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:38.411065 systemd-logind[1515]: New session 48 of user core.
May 14 18:09:38.416270 systemd[1]: Started session-48.scope - Session 48 of User core.
May 14 18:09:38.772506 sshd[4540]: Connection closed by 147.75.109.163 port 42150
May 14 18:09:38.773410 sshd-session[4538]: pam_unix(sshd:session): session closed for user core
May 14 18:09:38.778226 systemd-logind[1515]: Session 48 logged out. Waiting for processes to exit.
May 14 18:09:38.779288 systemd[1]: sshd@50-172.236.122.223:22-147.75.109.163:42150.service: Deactivated successfully.
May 14 18:09:38.782749 systemd[1]: session-48.scope: Deactivated successfully.
May 14 18:09:38.785097 systemd-logind[1515]: Removed session 48.
May 14 18:09:39.040208 containerd[1533]: time="2025-05-14T18:09:39.039674374Z" level=warning msg="container event discarded" container=a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664 type=CONTAINER_CREATED_EVENT
May 14 18:09:39.271339 containerd[1533]: time="2025-05-14T18:09:39.271274389Z" level=warning msg="container event discarded" container=a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664 type=CONTAINER_STARTED_EVENT
May 14 18:09:43.835550 systemd[1]: Started sshd@51-172.236.122.223:22-147.75.109.163:49256.service - OpenSSH per-connection server daemon (147.75.109.163:49256).
May 14 18:09:44.180850 sshd[4553]: Accepted publickey for core from 147.75.109.163 port 49256 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:44.182500 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:44.188184 systemd-logind[1515]: New session 49 of user core.
May 14 18:09:44.193261 systemd[1]: Started session-49.scope - Session 49 of User core.
May 14 18:09:44.493655 sshd[4555]: Connection closed by 147.75.109.163 port 49256
May 14 18:09:44.494671 sshd-session[4553]: pam_unix(sshd:session): session closed for user core
May 14 18:09:44.499914 systemd-logind[1515]: Session 49 logged out. Waiting for processes to exit.
May 14 18:09:44.501473 systemd[1]: sshd@51-172.236.122.223:22-147.75.109.163:49256.service: Deactivated successfully.
May 14 18:09:44.503833 systemd[1]: session-49.scope: Deactivated successfully.
May 14 18:09:44.505686 systemd-logind[1515]: Removed session 49.
May 14 18:09:48.932316 containerd[1533]: time="2025-05-14T18:09:48.932178807Z" level=warning msg="container event discarded" container=df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5 type=CONTAINER_CREATED_EVENT
May 14 18:09:49.011389 containerd[1533]: time="2025-05-14T18:09:49.011291548Z" level=warning msg="container event discarded" container=df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5 type=CONTAINER_STARTED_EVENT
May 14 18:09:49.087691 containerd[1533]: time="2025-05-14T18:09:49.087596931Z" level=warning msg="container event discarded" container=df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5 type=CONTAINER_STOPPED_EVENT
May 14 18:09:49.451828 containerd[1533]: time="2025-05-14T18:09:49.451726255Z" level=warning msg="container event discarded" container=a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9 type=CONTAINER_CREATED_EVENT
May 14 18:09:49.522155 containerd[1533]: time="2025-05-14T18:09:49.522043172Z" level=warning msg="container event discarded" container=a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9 type=CONTAINER_STARTED_EVENT
May 14 18:09:49.558984 systemd[1]: Started sshd@52-172.236.122.223:22-147.75.109.163:56972.service - OpenSSH per-connection server daemon (147.75.109.163:56972).
May 14 18:09:49.599658 containerd[1533]: time="2025-05-14T18:09:49.599578250Z" level=warning msg="container event discarded" container=a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9 type=CONTAINER_STOPPED_EVENT
May 14 18:09:49.906234 sshd[4567]: Accepted publickey for core from 147.75.109.163 port 56972 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:49.909047 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:49.914502 systemd-logind[1515]: New session 50 of user core.
May 14 18:09:49.919309 systemd[1]: Started session-50.scope - Session 50 of User core.
May 14 18:09:50.214637 sshd[4569]: Connection closed by 147.75.109.163 port 56972
May 14 18:09:50.215573 sshd-session[4567]: pam_unix(sshd:session): session closed for user core
May 14 18:09:50.220716 systemd[1]: sshd@52-172.236.122.223:22-147.75.109.163:56972.service: Deactivated successfully.
May 14 18:09:50.223124 systemd[1]: session-50.scope: Deactivated successfully.
May 14 18:09:50.224307 systemd-logind[1515]: Session 50 logged out. Waiting for processes to exit.
May 14 18:09:50.225726 systemd-logind[1515]: Removed session 50.
May 14 18:09:50.463443 containerd[1533]: time="2025-05-14T18:09:50.463363043Z" level=warning msg="container event discarded" container=06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc type=CONTAINER_CREATED_EVENT
May 14 18:09:50.535423 containerd[1533]: time="2025-05-14T18:09:50.535227667Z" level=warning msg="container event discarded" container=06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc type=CONTAINER_STARTED_EVENT
May 14 18:09:50.599072 containerd[1533]: time="2025-05-14T18:09:50.598798361Z" level=warning msg="container event discarded" container=06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc type=CONTAINER_STOPPED_EVENT
May 14 18:09:51.472396 containerd[1533]: time="2025-05-14T18:09:51.472308543Z" level=warning msg="container event discarded" container=32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd type=CONTAINER_CREATED_EVENT
May 14 18:09:51.552154 containerd[1533]: time="2025-05-14T18:09:51.552071827Z" level=warning msg="container event discarded" container=32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd type=CONTAINER_STARTED_EVENT
May 14 18:09:51.574349 containerd[1533]: time="2025-05-14T18:09:51.574317142Z" level=warning msg="container event discarded" container=32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd type=CONTAINER_STOPPED_EVENT
May 14 18:09:52.476466 containerd[1533]: time="2025-05-14T18:09:52.476335481Z" level=warning msg="container event discarded" container=c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf type=CONTAINER_CREATED_EVENT
May 14 18:09:52.541735 containerd[1533]: time="2025-05-14T18:09:52.541670163Z" level=warning msg="container event discarded" container=c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf type=CONTAINER_STARTED_EVENT
May 14 18:09:55.283771 systemd[1]: Started sshd@53-172.236.122.223:22-147.75.109.163:56988.service - OpenSSH per-connection server daemon (147.75.109.163:56988).
May 14 18:09:55.621065 sshd[4581]: Accepted publickey for core from 147.75.109.163 port 56988 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:09:55.623200 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:09:55.630017 systemd-logind[1515]: New session 51 of user core.
May 14 18:09:55.637305 systemd[1]: Started session-51.scope - Session 51 of User core.
May 14 18:09:55.934723 sshd[4583]: Connection closed by 147.75.109.163 port 56988
May 14 18:09:55.936229 sshd-session[4581]: pam_unix(sshd:session): session closed for user core
May 14 18:09:55.944277 systemd[1]: sshd@53-172.236.122.223:22-147.75.109.163:56988.service: Deactivated successfully.
May 14 18:09:55.947498 systemd[1]: session-51.scope: Deactivated successfully.
May 14 18:09:55.948985 systemd-logind[1515]: Session 51 logged out. Waiting for processes to exit.
May 14 18:09:55.950948 systemd-logind[1515]: Removed session 51.
May 14 18:09:56.905653 kubelet[2802]: E0514 18:09:56.905608 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:01.007257 systemd[1]: Started sshd@54-172.236.122.223:22-147.75.109.163:55984.service - OpenSSH per-connection server daemon (147.75.109.163:55984).
May 14 18:10:01.367493 sshd[4595]: Accepted publickey for core from 147.75.109.163 port 55984 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:10:01.369514 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:01.377355 systemd-logind[1515]: New session 52 of user core.
May 14 18:10:01.381363 systemd[1]: Started session-52.scope - Session 52 of User core.
May 14 18:10:01.696311 sshd[4597]: Connection closed by 147.75.109.163 port 55984
May 14 18:10:01.697367 sshd-session[4595]: pam_unix(sshd:session): session closed for user core
May 14 18:10:01.702151 systemd-logind[1515]: Session 52 logged out. Waiting for processes to exit.
May 14 18:10:01.703465 systemd[1]: sshd@54-172.236.122.223:22-147.75.109.163:55984.service: Deactivated successfully.
May 14 18:10:01.706085 systemd[1]: session-52.scope: Deactivated successfully.
May 14 18:10:01.708674 systemd-logind[1515]: Removed session 52.
May 14 18:10:02.906167 kubelet[2802]: E0514 18:10:02.906104 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:06.756294 systemd[1]: Started sshd@55-172.236.122.223:22-147.75.109.163:55996.service - OpenSSH per-connection server daemon (147.75.109.163:55996).
May 14 18:10:07.100429 sshd[4609]: Accepted publickey for core from 147.75.109.163 port 55996 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:10:07.102339 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:07.107064 systemd-logind[1515]: New session 53 of user core.
May 14 18:10:07.118273 systemd[1]: Started session-53.scope - Session 53 of User core.
May 14 18:10:07.412025 sshd[4613]: Connection closed by 147.75.109.163 port 55996
May 14 18:10:07.413300 sshd-session[4609]: pam_unix(sshd:session): session closed for user core
May 14 18:10:07.419075 systemd-logind[1515]: Session 53 logged out. Waiting for processes to exit.
May 14 18:10:07.420116 systemd[1]: sshd@55-172.236.122.223:22-147.75.109.163:55996.service: Deactivated successfully.
May 14 18:10:07.422701 systemd[1]: session-53.scope: Deactivated successfully.
May 14 18:10:07.425069 systemd-logind[1515]: Removed session 53.
May 14 18:10:12.479649 systemd[1]: Started sshd@56-172.236.122.223:22-147.75.109.163:37748.service - OpenSSH per-connection server daemon (147.75.109.163:37748).
May 14 18:10:12.830688 sshd[4624]: Accepted publickey for core from 147.75.109.163 port 37748 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:10:12.832476 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:12.838380 systemd-logind[1515]: New session 54 of user core.
May 14 18:10:12.840287 systemd[1]: Started session-54.scope - Session 54 of User core.
May 14 18:10:13.133915 sshd[4626]: Connection closed by 147.75.109.163 port 37748
May 14 18:10:13.135034 sshd-session[4624]: pam_unix(sshd:session): session closed for user core
May 14 18:10:13.140362 systemd[1]: sshd@56-172.236.122.223:22-147.75.109.163:37748.service: Deactivated successfully.
May 14 18:10:13.143744 systemd[1]: session-54.scope: Deactivated successfully.
May 14 18:10:13.145213 systemd-logind[1515]: Session 54 logged out. Waiting for processes to exit.
May 14 18:10:13.146733 systemd-logind[1515]: Removed session 54.
May 14 18:10:13.197730 systemd[1]: Started sshd@57-172.236.122.223:22-147.75.109.163:37756.service - OpenSSH per-connection server daemon (147.75.109.163:37756).
May 14 18:10:13.548713 sshd[4638]: Accepted publickey for core from 147.75.109.163 port 37756 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:10:13.550586 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:13.558207 systemd-logind[1515]: New session 55 of user core.
May 14 18:10:13.563281 systemd[1]: Started session-55.scope - Session 55 of User core.
May 14 18:10:15.104942 containerd[1533]: time="2025-05-14T18:10:15.104775348Z" level=info msg="StopContainer for \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" with timeout 30 (s)"
May 14 18:10:15.107127 containerd[1533]: time="2025-05-14T18:10:15.106736922Z" level=info msg="Stop container \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" with signal terminated"
May 14 18:10:15.125823 systemd[1]: cri-containerd-a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664.scope: Deactivated successfully.
May 14 18:10:15.132066 containerd[1533]: time="2025-05-14T18:10:15.131992059Z" level=info msg="received exit event container_id:\"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" id:\"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" pid:3220 exited_at:{seconds:1747246215 nanos:131309622}"
May 14 18:10:15.132540 containerd[1533]: time="2025-05-14T18:10:15.132123189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" id:\"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" pid:3220 exited_at:{seconds:1747246215 nanos:131309622}"
May 14 18:10:15.163580 containerd[1533]: time="2025-05-14T18:10:15.163496537Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 18:10:15.172781 containerd[1533]: time="2025-05-14T18:10:15.172750047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" id:\"07bb6b4000d39473308e350ec156156dd5a53322d0ba46c8a083f0dfbf16af55\" pid:4668 exited_at:{seconds:1747246215 nanos:172434418}"
May 14 18:10:15.179782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664-rootfs.mount: Deactivated successfully.
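
[Note: the exited_at fields above carry the exit instant as Unix seconds plus nanoseconds; decoding them reproduces the journal's own wall-clock timestamps, which is a quick way to correlate task exits across logs. A self-contained check, with the values copied from the a33ae1d1... exit event above:]

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// exited_at:{seconds:1747246215 nanos:131309622} from the exit event above.
    	t := time.Unix(1747246215, 131309622).UTC()
    	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-05-14T18:10:15.131309622Z
    }
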
May 14 18:10:15.181247 containerd[1533]: time="2025-05-14T18:10:15.180682421Z" level=info msg="StopContainer for \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" with timeout 2 (s)"
May 14 18:10:15.181843 containerd[1533]: time="2025-05-14T18:10:15.181457868Z" level=info msg="Stop container \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" with signal terminated"
May 14 18:10:15.193233 containerd[1533]: time="2025-05-14T18:10:15.193191340Z" level=info msg="StopContainer for \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" returns successfully"
May 14 18:10:15.196167 containerd[1533]: time="2025-05-14T18:10:15.195513132Z" level=info msg="StopPodSandbox for \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\""
May 14 18:10:15.196167 containerd[1533]: time="2025-05-14T18:10:15.195595542Z" level=info msg="Container to stop \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:15.202220 systemd-networkd[1457]: lxc_health: Link DOWN
May 14 18:10:15.202235 systemd-networkd[1457]: lxc_health: Lost carrier
May 14 18:10:15.222440 systemd[1]: cri-containerd-c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf.scope: Deactivated successfully.
May 14 18:10:15.222784 systemd[1]: cri-containerd-c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf.scope: Consumed 8.018s CPU time, 123.5M memory peak, 128K read from disk, 13.3M written to disk.
May 14 18:10:15.225982 containerd[1533]: time="2025-05-14T18:10:15.225122326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" id:\"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" pid:3452 exited_at:{seconds:1747246215 nanos:224742767}"
May 14 18:10:15.225982 containerd[1533]: time="2025-05-14T18:10:15.225332935Z" level=info msg="received exit event container_id:\"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" id:\"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" pid:3452 exited_at:{seconds:1747246215 nanos:224742767}"
May 14 18:10:15.228122 kubelet[2802]: E0514 18:10:15.228037 2802 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:10:15.261773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf-rootfs.mount: Deactivated successfully.
May 14 18:10:15.265099 systemd[1]: cri-containerd-eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7.scope: Deactivated successfully.
May 14 18:10:15.270405 containerd[1533]: time="2025-05-14T18:10:15.270204959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" id:\"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" pid:2929 exit_status:137 exited_at:{seconds:1747246215 nanos:268545164}"
May 14 18:10:15.282078 containerd[1533]: time="2025-05-14T18:10:15.281681771Z" level=info msg="StopContainer for \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" returns successfully"
May 14 18:10:15.282330 containerd[1533]: time="2025-05-14T18:10:15.282301609Z" level=info msg="StopPodSandbox for \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\""
May 14 18:10:15.282466 containerd[1533]: time="2025-05-14T18:10:15.282414039Z" level=info msg="Container to stop \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:15.282466 containerd[1533]: time="2025-05-14T18:10:15.282452169Z" level=info msg="Container to stop \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:15.282556 containerd[1533]: time="2025-05-14T18:10:15.282468709Z" level=info msg="Container to stop \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:15.282556 containerd[1533]: time="2025-05-14T18:10:15.282477779Z" level=info msg="Container to stop \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:15.282556 containerd[1533]: time="2025-05-14T18:10:15.282486049Z" level=info msg="Container to stop \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 18:10:15.293420 systemd[1]: cri-containerd-24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435.scope: Deactivated successfully.
May 14 18:10:15.343062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435-rootfs.mount: Deactivated successfully.
May 14 18:10:15.347011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7-rootfs.mount: Deactivated successfully.
May 14 18:10:15.350209 containerd[1533]: time="2025-05-14T18:10:15.350166188Z" level=info msg="shim disconnected" id=24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435 namespace=k8s.io
May 14 18:10:15.350209 containerd[1533]: time="2025-05-14T18:10:15.350204948Z" level=warning msg="cleaning up after shim disconnected" id=24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435 namespace=k8s.io
May 14 18:10:15.350387 containerd[1533]: time="2025-05-14T18:10:15.350213898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:10:15.350550 containerd[1533]: time="2025-05-14T18:10:15.350474557Z" level=info msg="shim disconnected" id=eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7 namespace=k8s.io
May 14 18:10:15.350550 containerd[1533]: time="2025-05-14T18:10:15.350498677Z" level=warning msg="cleaning up after shim disconnected" id=eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7 namespace=k8s.io
May 14 18:10:15.350550 containerd[1533]: time="2025-05-14T18:10:15.350506117Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 18:10:15.377917 containerd[1533]: time="2025-05-14T18:10:15.375565305Z" level=info msg="received exit event sandbox_id:\"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" exit_status:137 exited_at:{seconds:1747246215 nanos:295127498}"
May 14 18:10:15.378465 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435-shm.mount: Deactivated successfully.
May 14 18:10:15.379962 containerd[1533]: time="2025-05-14T18:10:15.379084754Z" level=info msg="TearDown network for sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" successfully"
May 14 18:10:15.381248 containerd[1533]: time="2025-05-14T18:10:15.381202657Z" level=info msg="StopPodSandbox for \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" returns successfully"
May 14 18:10:15.381420 containerd[1533]: time="2025-05-14T18:10:15.381352627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" id:\"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" pid:3012 exit_status:137 exited_at:{seconds:1747246215 nanos:295127498}"
May 14 18:10:15.381875 containerd[1533]: time="2025-05-14T18:10:15.381826355Z" level=info msg="TearDown network for sandbox \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" successfully"
May 14 18:10:15.381917 containerd[1533]: time="2025-05-14T18:10:15.381873525Z" level=info msg="StopPodSandbox for \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" returns successfully"
May 14 18:10:15.382059 containerd[1533]: time="2025-05-14T18:10:15.382025724Z" level=info msg="received exit event sandbox_id:\"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" exit_status:137 exited_at:{seconds:1747246215 nanos:268545164}"
May 14 18:10:15.488294 kubelet[2802]: I0514 18:10:15.488219 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9p52\" (UniqueName: \"kubernetes.io/projected/a8477c48-0170-4eb0-b49c-9eaadad990cb-kube-api-access-l9p52\") pod \"a8477c48-0170-4eb0-b49c-9eaadad990cb\" (UID: \"a8477c48-0170-4eb0-b49c-9eaadad990cb\") "
May 14 18:10:15.489350 kubelet[2802]: I0514 18:10:15.488570 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-hubble-tls\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489350 kubelet[2802]: I0514 18:10:15.488602 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-config-path\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489350 kubelet[2802]: I0514 18:10:15.488630 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcgcv\" (UniqueName: \"kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-kube-api-access-kcgcv\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489350 kubelet[2802]: I0514 18:10:15.488649 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-etc-cni-netd\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489350 kubelet[2802]: I0514 18:10:15.488666 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-run\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489350 kubelet[2802]: I0514 18:10:15.488684 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8477c48-0170-4eb0-b49c-9eaadad990cb-cilium-config-path\") pod \"a8477c48-0170-4eb0-b49c-9eaadad990cb\" (UID: \"a8477c48-0170-4eb0-b49c-9eaadad990cb\") "
May 14 18:10:15.489535 kubelet[2802]: I0514 18:10:15.488714 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-bpf-maps\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489535 kubelet[2802]: I0514 18:10:15.488730 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-xtables-lock\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489535 kubelet[2802]: I0514 18:10:15.488750 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-hostproc\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489535 kubelet[2802]: I0514 18:10:15.488766 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-net\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489535 kubelet[2802]: I0514 18:10:15.488781 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-lib-modules\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489535 kubelet[2802]: I0514 18:10:15.488796 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cni-path\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489686 kubelet[2802]: I0514 18:10:15.488813 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-kernel\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489686 kubelet[2802]: I0514 18:10:15.488827 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-cgroup\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.489686 kubelet[2802]: I0514 18:10:15.488845 2802 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0586fba4-5080-424b-ac15-ac66e0a9d82f-clustermesh-secrets\") pod \"0586fba4-5080-424b-ac15-ac66e0a9d82f\" (UID: \"0586fba4-5080-424b-ac15-ac66e0a9d82f\") "
May 14 18:10:15.490600 kubelet[2802]: I0514 18:10:15.490491 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490775 kubelet[2802]: I0514 18:10:15.490742 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490813 kubelet[2802]: I0514 18:10:15.490776 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-hostproc" (OuterVolumeSpecName: "hostproc") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490813 kubelet[2802]: I0514 18:10:15.490803 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490879 kubelet[2802]: I0514 18:10:15.490820 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490879 kubelet[2802]: I0514 18:10:15.490835 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cni-path" (OuterVolumeSpecName: "cni-path") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490879 kubelet[2802]: I0514 18:10:15.490849 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490879 kubelet[2802]: I0514 18:10:15.490865 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.490879 kubelet[2802]: I0514 18:10:15.490879 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.493048 kubelet[2802]: I0514 18:10:15.493016 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 18:10:15.497129 kubelet[2802]: I0514 18:10:15.496512 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0586fba4-5080-424b-ac15-ac66e0a9d82f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 18:10:15.498446 kubelet[2802]: I0514 18:10:15.498372 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8477c48-0170-4eb0-b49c-9eaadad990cb-kube-api-access-l9p52" (OuterVolumeSpecName: "kube-api-access-l9p52") pod "a8477c48-0170-4eb0-b49c-9eaadad990cb" (UID: "a8477c48-0170-4eb0-b49c-9eaadad990cb"). InnerVolumeSpecName "kube-api-access-l9p52". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:10:15.499802 kubelet[2802]: I0514 18:10:15.499766 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:10:15.506088 kubelet[2802]: I0514 18:10:15.505874 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8477c48-0170-4eb0-b49c-9eaadad990cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8477c48-0170-4eb0-b49c-9eaadad990cb" (UID: "a8477c48-0170-4eb0-b49c-9eaadad990cb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 18:10:15.506088 kubelet[2802]: I0514 18:10:15.505976 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-kube-api-access-kcgcv" (OuterVolumeSpecName: "kube-api-access-kcgcv") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "kube-api-access-kcgcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 18:10:15.506088 kubelet[2802]: I0514 18:10:15.506020 2802 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0586fba4-5080-424b-ac15-ac66e0a9d82f" (UID: "0586fba4-5080-424b-ac15-ac66e0a9d82f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 18:10:15.589298 kubelet[2802]: I0514 18:10:15.589247 2802 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-xtables-lock\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589298 kubelet[2802]: I0514 18:10:15.589284 2802 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-hostproc\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589298 kubelet[2802]: I0514 18:10:15.589294 2802 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-net\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589308 2802 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-lib-modules\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589318 2802 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-host-proc-sys-kernel\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589326 2802 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-cgroup\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589338 2802 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0586fba4-5080-424b-ac15-ac66e0a9d82f-clustermesh-secrets\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589345 2802 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cni-path\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589361 2802 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-hubble-tls\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589372 2802 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l9p52\" (UniqueName: \"kubernetes.io/projected/a8477c48-0170-4eb0-b49c-9eaadad990cb-kube-api-access-l9p52\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589429 kubelet[2802]: I0514 18:10:15.589380 2802 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-config-path\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589621 kubelet[2802]: I0514 18:10:15.589388 2802 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kcgcv\" (UniqueName: \"kubernetes.io/projected/0586fba4-5080-424b-ac15-ac66e0a9d82f-kube-api-access-kcgcv\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589621 kubelet[2802]: I0514 18:10:15.589398 2802 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-etc-cni-netd\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589621 kubelet[2802]: I0514 18:10:15.589413 2802 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-cilium-run\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589621 kubelet[2802]: I0514 18:10:15.589423 2802 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8477c48-0170-4eb0-b49c-9eaadad990cb-cilium-config-path\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.589621 kubelet[2802]: I0514 18:10:15.589435 2802 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0586fba4-5080-424b-ac15-ac66e0a9d82f-bpf-maps\") on node \"172-236-122-223\" DevicePath \"\""
May 14 18:10:15.899814 kubelet[2802]: I0514 18:10:15.899765 2802 scope.go:117] "RemoveContainer" containerID="c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf"
May 14 18:10:15.903335 containerd[1533]: time="2025-05-14T18:10:15.903067786Z" level=info msg="RemoveContainer for \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\""
May 14 18:10:15.910975 containerd[1533]: time="2025-05-14T18:10:15.910933250Z" level=info msg="RemoveContainer for \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" returns successfully"
May 14 18:10:15.915060 kubelet[2802]: I0514 18:10:15.915017 2802 scope.go:117] "RemoveContainer" containerID="32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd"
May 14 18:10:15.917422 systemd[1]: Removed slice kubepods-burstable-pod0586fba4_5080_424b_ac15_ac66e0a9d82f.slice - libcontainer container kubepods-burstable-pod0586fba4_5080_424b_ac15_ac66e0a9d82f.slice.
May 14 18:10:15.918672 containerd[1533]: time="2025-05-14T18:10:15.917656898Z" level=info msg="RemoveContainer for \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\""
May 14 18:10:15.917529 systemd[1]: kubepods-burstable-pod0586fba4_5080_424b_ac15_ac66e0a9d82f.slice: Consumed 8.304s CPU time, 123.9M memory peak, 128K read from disk, 13.3M written to disk.
May 14 18:10:15.928216 containerd[1533]: time="2025-05-14T18:10:15.927821945Z" level=info msg="RemoveContainer for \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" returns successfully"
May 14 18:10:15.930421 kubelet[2802]: I0514 18:10:15.930349 2802 scope.go:117] "RemoveContainer" containerID="06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc"
May 14 18:10:15.935825 containerd[1533]: time="2025-05-14T18:10:15.935795929Z" level=info msg="RemoveContainer for \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\""
May 14 18:10:15.941994 systemd[1]: Removed slice kubepods-besteffort-poda8477c48_0170_4eb0_b49c_9eaadad990cb.slice - libcontainer container kubepods-besteffort-poda8477c48_0170_4eb0_b49c_9eaadad990cb.slice.
May 14 18:10:15.949096 containerd[1533]: time="2025-05-14T18:10:15.947748670Z" level=info msg="RemoveContainer for \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" returns successfully"
May 14 18:10:15.949897 kubelet[2802]: I0514 18:10:15.949844 2802 scope.go:117] "RemoveContainer" containerID="a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9"
May 14 18:10:15.956389 containerd[1533]: time="2025-05-14T18:10:15.956116133Z" level=info msg="RemoveContainer for \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\""
May 14 18:10:15.961378 containerd[1533]: time="2025-05-14T18:10:15.961331206Z" level=info msg="RemoveContainer for \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" returns successfully"
May 14 18:10:15.961820 kubelet[2802]: I0514 18:10:15.961616 2802 scope.go:117] "RemoveContainer" containerID="df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5"
May 14 18:10:15.963723 containerd[1533]: time="2025-05-14T18:10:15.963665828Z" level=info msg="RemoveContainer for \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\""
May 14 18:10:15.966618 containerd[1533]: time="2025-05-14T18:10:15.966518639Z" level=info msg="RemoveContainer for \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" returns successfully"
May 14 18:10:15.967620 kubelet[2802]: I0514 18:10:15.967452 2802 scope.go:117] "RemoveContainer" containerID="c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf"
May 14 18:10:15.967854 containerd[1533]: time="2025-05-14T18:10:15.967804725Z" level=error msg="ContainerStatus for \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\": not found"
May 14 18:10:15.968430 kubelet[2802]: E0514 18:10:15.968043 2802 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\": not found" containerID="c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf"
May 14 18:10:15.968828 kubelet[2802]: I0514 18:10:15.968574 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf"} err="failed to get container status \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0f3fc1d155fceae85dc327406b2f86afed082a3ef412d7d67143abffecaf3bf\": not found"
May 14 18:10:15.968828 kubelet[2802]: I0514 18:10:15.968690 2802 scope.go:117] "RemoveContainer" containerID="32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd"
May 14 18:10:15.969925 containerd[1533]: time="2025-05-14T18:10:15.969453789Z" level=error msg="ContainerStatus for \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\": not found"
May 14 18:10:15.970816 kubelet[2802]: E0514 18:10:15.970796 2802 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\": not found" containerID="32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd"
May 14 18:10:15.970885 kubelet[2802]: I0514 18:10:15.970869 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd"} err="failed to get container status \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\": rpc error: code = NotFound desc = an error occurred when try to find container \"32b2ba313f8c3c01fd58585db953d7d46cdf5cc2025730256e286c323713cadd\": not found"
May 14 18:10:15.971033 kubelet[2802]: I0514 18:10:15.970962 2802 scope.go:117] "RemoveContainer" containerID="06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc"
May 14 18:10:15.971302 containerd[1533]: time="2025-05-14T18:10:15.971249074Z" level=error msg="ContainerStatus for \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\": not found"
May 14 18:10:15.971474 kubelet[2802]: E0514 18:10:15.971433 2802 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\": not found" containerID="06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc"
May 14 18:10:15.971772 kubelet[2802]: I0514 18:10:15.971705 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc"} err="failed to get container status \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"06894e7cee091937ab749dff7a639fb7940c19614f0c5ec5a7a33737f0a722cc\": not found"
May 14 18:10:15.971772 kubelet[2802]: I0514 18:10:15.971724 2802 scope.go:117] "RemoveContainer" containerID="a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9"
May 14 18:10:15.972268 containerd[1533]: time="2025-05-14T18:10:15.972191771Z" level=error msg="ContainerStatus for \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\": not found"
May 14 18:10:15.972421 kubelet[2802]: E0514 18:10:15.972361 2802 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\": not found" containerID="a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9"
May 14 18:10:15.972421 kubelet[2802]: I0514 18:10:15.972377 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9"} err="failed to get container status \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a248426fc3fc7c9022d0eaf9c8e0db1fe8322006c0c0f0251fc49de52abe79d9\": not found"
May 14 18:10:15.972650 kubelet[2802]: I0514 18:10:15.972607 2802 scope.go:117] "RemoveContainer" containerID="df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5"
May 14 18:10:15.972916 containerd[1533]: time="2025-05-14T18:10:15.972858198Z" level=error msg="ContainerStatus for \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\": not found"
May 14 18:10:15.973232 kubelet[2802]: E0514 18:10:15.973187 2802 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\": not found" containerID="df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5"
May 14 18:10:15.973390 kubelet[2802]: I0514 18:10:15.973208 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5"} err="failed to get container status \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"df1d88cb524090a6647f674a5efd81dce7439ad356e4f15b29cb6af8290034c5\": not found"
May 14 18:10:15.973390 kubelet[2802]: I0514 18:10:15.973327 2802 scope.go:117] "RemoveContainer" containerID="a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664"
May 14 18:10:15.975694 containerd[1533]: time="2025-05-14T18:10:15.975671589Z" level=info msg="RemoveContainer for \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\""
May 14 18:10:15.978792 containerd[1533]: time="2025-05-14T18:10:15.978767289Z" level=info msg="RemoveContainer for \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" returns successfully"
May 14 18:10:15.978991 kubelet[2802]: I0514 18:10:15.978961 2802 scope.go:117] "RemoveContainer" containerID="a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664"
May 14 18:10:15.979486 containerd[1533]: time="2025-05-14T18:10:15.979407697Z" level=error msg="ContainerStatus for \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\": not found"
May 14 18:10:15.979716 kubelet[2802]: E0514 18:10:15.979653 2802 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\": not found" containerID="a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664"
May 14 18:10:15.979716 kubelet[2802]: I0514 18:10:15.979675 2802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664"} err="failed to get container status \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\": rpc error: code = NotFound desc = an error occurred when try to find container \"a33ae1d1e253182e110f9b4775f8a1c40bf73d13e23bbf5ef6ff51c075439664\": not found"
May 14 18:10:16.178121 systemd[1]: var-lib-kubelet-pods-0586fba4\x2d5080\x2d424b\x2dac15\x2dac66e0a9d82f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkcgcv.mount: Deactivated successfully.
May 14 18:10:16.178716 systemd[1]: var-lib-kubelet-pods-0586fba4\x2d5080\x2d424b\x2dac15\x2dac66e0a9d82f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 14 18:10:16.178795 systemd[1]: var-lib-kubelet-pods-0586fba4\x2d5080\x2d424b\x2dac15\x2dac66e0a9d82f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 14 18:10:16.179262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7-shm.mount: Deactivated successfully.
May 14 18:10:16.179589 systemd[1]: var-lib-kubelet-pods-a8477c48\x2d0170\x2d4eb0\x2db49c\x2d9eaadad990cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9p52.mount: Deactivated successfully.
May 14 18:10:17.088260 sshd[4640]: Connection closed by 147.75.109.163 port 37756
May 14 18:10:17.089505 sshd-session[4638]: pam_unix(sshd:session): session closed for user core
May 14 18:10:17.095953 systemd-logind[1515]: Session 55 logged out. Waiting for processes to exit.
May 14 18:10:17.096982 systemd[1]: sshd@57-172.236.122.223:22-147.75.109.163:37756.service: Deactivated successfully.
May 14 18:10:17.100583 systemd[1]: session-55.scope: Deactivated successfully.
May 14 18:10:17.102670 systemd-logind[1515]: Removed session 55.
May 14 18:10:17.128301 kubelet[2802]: I0514 18:10:17.127619 2802 setters.go:580] "Node became not ready" node="172-236-122-223" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T18:10:17Z","lastTransitionTime":"2025-05-14T18:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 18:10:17.154447 systemd[1]: Started sshd@58-172.236.122.223:22-147.75.109.163:37768.service - OpenSSH per-connection server daemon (147.75.109.163:37768).
May 14 18:10:17.491295 sshd[4799]: Accepted publickey for core from 147.75.109.163 port 37768 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:10:17.492560 sshd-session[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:17.498857 systemd-logind[1515]: New session 56 of user core.
May 14 18:10:17.506340 systemd[1]: Started session-56.scope - Session 56 of User core.
May 14 18:10:17.910088 kubelet[2802]: I0514 18:10:17.910035 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" path="/var/lib/kubelet/pods/0586fba4-5080-424b-ac15-ac66e0a9d82f/volumes"
May 14 18:10:17.913151 kubelet[2802]: I0514 18:10:17.911350 2802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8477c48-0170-4eb0-b49c-9eaadad990cb" path="/var/lib/kubelet/pods/a8477c48-0170-4eb0-b49c-9eaadad990cb/volumes"
May 14 18:10:18.330865 sshd[4801]: Connection closed by 147.75.109.163 port 37768
May 14 18:10:18.332086 sshd-session[4799]: pam_unix(sshd:session): session closed for user core
May 14 18:10:18.344020 systemd[1]: sshd@58-172.236.122.223:22-147.75.109.163:37768.service: Deactivated successfully.
May 14 18:10:18.344318 systemd-logind[1515]: Session 56 logged out. Waiting for processes to exit.
May 14 18:10:18.349304 systemd[1]: session-56.scope: Deactivated successfully.
May 14 18:10:18.351822 systemd-logind[1515]: Removed session 56.
May 14 18:10:18.357419 kubelet[2802]: I0514 18:10:18.357337 2802 topology_manager.go:215] "Topology Admit Handler" podUID="d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0" podNamespace="kube-system" podName="cilium-4g5h4"
May 14 18:10:18.357941 kubelet[2802]: E0514 18:10:18.357448 2802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" containerName="mount-cgroup"
May 14 18:10:18.357941 kubelet[2802]: E0514 18:10:18.357459 2802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" containerName="clean-cilium-state"
May 14 18:10:18.357941 kubelet[2802]: E0514 18:10:18.357469 2802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" containerName="cilium-agent"
May 14 18:10:18.357941 kubelet[2802]: E0514 18:10:18.357481 2802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8477c48-0170-4eb0-b49c-9eaadad990cb" containerName="cilium-operator"
May 14 18:10:18.357941 kubelet[2802]: E0514 18:10:18.357495 2802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" containerName="apply-sysctl-overwrites"
May 14 18:10:18.357941 kubelet[2802]: E0514 18:10:18.357501 2802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" containerName="mount-bpf-fs"
May 14 18:10:18.357941 kubelet[2802]: I0514 18:10:18.357567 2802 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8477c48-0170-4eb0-b49c-9eaadad990cb" containerName="cilium-operator"
May 14 18:10:18.357941 kubelet[2802]: I0514 18:10:18.357573 2802 memory_manager.go:354] "RemoveStaleState removing state" podUID="0586fba4-5080-424b-ac15-ac66e0a9d82f" containerName="cilium-agent"
May 14 18:10:18.368757 systemd[1]: Created slice kubepods-burstable-podd90b40e6_94ac_4b6d_99e2_2e0c6b992ba0.slice - libcontainer container kubepods-burstable-podd90b40e6_94ac_4b6d_99e2_2e0c6b992ba0.slice.
May 14 18:10:18.398278 systemd[1]: Started sshd@59-172.236.122.223:22-147.75.109.163:48130.service - OpenSSH per-connection server daemon (147.75.109.163:48130).
May 14 18:10:18.407347 kubelet[2802]: I0514 18:10:18.407226 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-bpf-maps\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.407717 kubelet[2802]: I0514 18:10:18.407303 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-xtables-lock\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.407786 kubelet[2802]: I0514 18:10:18.407726 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-clustermesh-secrets\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.407786 kubelet[2802]: I0514 18:10:18.407773 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-cilium-cgroup\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.407866 kubelet[2802]: I0514 18:10:18.407801 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-etc-cni-netd\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.407892 kubelet[2802]: I0514 18:10:18.407854 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwms5\" (UniqueName: \"kubernetes.io/projected/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-kube-api-access-rwms5\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.407920 kubelet[2802]: I0514 18:10:18.407896 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-cilium-run\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.408269 kubelet[2802]: I0514 18:10:18.408102 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-cni-path\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.408269 kubelet[2802]: I0514 18:10:18.408163 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-lib-modules\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.408269 kubelet[2802]: I0514 18:10:18.408192 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-host-proc-sys-kernel\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.408269 kubelet[2802]: I0514 18:10:18.408211 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-host-proc-sys-net\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.409226 kubelet[2802]: I0514 18:10:18.409189 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-hostproc\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.409272 kubelet[2802]: I0514 18:10:18.409221 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-cilium-config-path\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.409272 kubelet[2802]: I0514 18:10:18.409250 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-cilium-ipsec-secrets\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.409323 kubelet[2802]: I0514 18:10:18.409280 2802 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0-hubble-tls\") pod \"cilium-4g5h4\" (UID: \"d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0\") " pod="kube-system/cilium-4g5h4"
May 14 18:10:18.672964 kubelet[2802]: E0514 18:10:18.672780 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:18.675456 containerd[1533]: time="2025-05-14T18:10:18.675212376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4g5h4,Uid:d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0,Namespace:kube-system,Attempt:0,}"
May 14 18:10:18.705419 containerd[1533]: time="2025-05-14T18:10:18.705095101Z" level=info msg="connecting to shim b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1" address="unix:///run/containerd/s/b3e85ba00d679ff17d2c799d52561f229b33fe35bbf0fe019babf5300dde0871" namespace=k8s.io protocol=ttrpc version=3
May 14 18:10:18.744326 systemd[1]: Started cri-containerd-b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1.scope - libcontainer container b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1.
May 14 18:10:18.755622 sshd[4812]: Accepted publickey for core from 147.75.109.163 port 48130 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:10:18.757092 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:18.766049 systemd-logind[1515]: New session 57 of user core.
May 14 18:10:18.770361 systemd[1]: Started session-57.scope - Session 57 of User core.
May 14 18:10:18.786578 containerd[1533]: time="2025-05-14T18:10:18.786525240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4g5h4,Uid:d90b40e6-94ac-4b6d-99e2-2e0c6b992ba0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\""
May 14 18:10:18.790116 kubelet[2802]: E0514 18:10:18.790080 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:18.800058 containerd[1533]: time="2025-05-14T18:10:18.800028907Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 18:10:18.812087 containerd[1533]: time="2025-05-14T18:10:18.811396520Z" level=info msg="Container 77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:18.822056 containerd[1533]: time="2025-05-14T18:10:18.821998497Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099\""
May 14 18:10:18.822997 containerd[1533]: time="2025-05-14T18:10:18.822816994Z" level=info msg="StartContainer for \"77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099\""
May 14 18:10:18.823864 containerd[1533]: time="2025-05-14T18:10:18.823831991Z" level=info msg="connecting to shim 77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099" address="unix:///run/containerd/s/b3e85ba00d679ff17d2c799d52561f229b33fe35bbf0fe019babf5300dde0871" protocol=ttrpc version=3
May 14 18:10:18.846261 systemd[1]: Started cri-containerd-77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099.scope - libcontainer container 77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099.
May 14 18:10:18.880263 containerd[1533]: time="2025-05-14T18:10:18.880200220Z" level=info msg="StartContainer for \"77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099\" returns successfully"
May 14 18:10:18.897946 systemd[1]: cri-containerd-77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099.scope: Deactivated successfully.
May 14 18:10:18.902084 containerd[1533]: time="2025-05-14T18:10:18.902040300Z" level=info msg="received exit event container_id:\"77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099\" id:\"77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099\" pid:4878 exited_at:{seconds:1747246218 nanos:901521852}"
May 14 18:10:18.902963 containerd[1533]: time="2025-05-14T18:10:18.902936287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099\" id:\"77ee124921cd05dfa842a5838b5d25648e32c57c704c99631a60457cb2128099\" pid:4878 exited_at:{seconds:1747246218 nanos:901521852}"
May 14 18:10:18.920196 kubelet[2802]: E0514 18:10:18.919043 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:19.001477 sshd[4858]: Connection closed by 147.75.109.163 port 48130
May 14 18:10:19.002458 sshd-session[4812]: pam_unix(sshd:session): session closed for user core
May 14 18:10:19.008564 systemd-logind[1515]: Session 57 logged out. Waiting for processes to exit.
May 14 18:10:19.009077 systemd[1]: sshd@59-172.236.122.223:22-147.75.109.163:48130.service: Deactivated successfully.
May 14 18:10:19.011861 systemd[1]: session-57.scope: Deactivated successfully.
May 14 18:10:19.015524 systemd-logind[1515]: Removed session 57.
May 14 18:10:19.065299 systemd[1]: Started sshd@60-172.236.122.223:22-147.75.109.163:48144.service - OpenSSH per-connection server daemon (147.75.109.163:48144).
May 14 18:10:19.399011 sshd[4917]: Accepted publickey for core from 147.75.109.163 port 48144 ssh2: RSA SHA256:BneGPm842G8RhChK+XxaOPcgIpPYV8BaRtoVqDfewsc
May 14 18:10:19.400772 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:10:19.406066 systemd-logind[1515]: New session 58 of user core.
May 14 18:10:19.409280 systemd[1]: Started session-58.scope - Session 58 of User core.
May 14 18:10:19.521556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857949208.mount: Deactivated successfully.
May 14 18:10:19.921182 kubelet[2802]: E0514 18:10:19.921119 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:19.922039 containerd[1533]: time="2025-05-14T18:10:19.921995121Z" level=info msg="StopPodSandbox for \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\""
May 14 18:10:19.922333 containerd[1533]: time="2025-05-14T18:10:19.922216760Z" level=info msg="TearDown network for sandbox \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" successfully"
May 14 18:10:19.922333 containerd[1533]: time="2025-05-14T18:10:19.922229610Z" level=info msg="StopPodSandbox for \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" returns successfully"
May 14 18:10:19.923242 containerd[1533]: time="2025-05-14T18:10:19.923218127Z" level=info msg="RemovePodSandbox for \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\""
May 14 18:10:19.923293 containerd[1533]: time="2025-05-14T18:10:19.923242667Z" level=info msg="Forcibly stopping sandbox \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\""
May 14 18:10:19.923329 containerd[1533]: time="2025-05-14T18:10:19.923306127Z" level=info msg="TearDown network for sandbox \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" successfully"
May 14 18:10:19.925104 containerd[1533]: time="2025-05-14T18:10:19.925081121Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 18:10:19.925352 containerd[1533]: time="2025-05-14T18:10:19.925222431Z" level=info msg="Ensure that sandbox eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7 in task-service has been cleanup successfully"
May 14 18:10:19.928311 containerd[1533]: time="2025-05-14T18:10:19.928186751Z" level=info msg="RemovePodSandbox \"eca8879e9da9ee259ca6d2b14ab1455285ef5f991fcabffdd1349edb1be8fbf7\" returns successfully"
May 14 18:10:19.929286 containerd[1533]: time="2025-05-14T18:10:19.929263348Z" level=info msg="StopPodSandbox for \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\""
May 14 18:10:19.929536 containerd[1533]: time="2025-05-14T18:10:19.929453377Z" level=info msg="TearDown network for sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" successfully"
May 14 18:10:19.929536 containerd[1533]: time="2025-05-14T18:10:19.929470637Z" level=info msg="StopPodSandbox for \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" returns successfully"
May 14 18:10:19.929947 containerd[1533]: time="2025-05-14T18:10:19.929926326Z" level=info msg="RemovePodSandbox for \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\""
May 14 18:10:19.930041 containerd[1533]: time="2025-05-14T18:10:19.930023235Z" level=info msg="Forcibly stopping sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\""
May 14 18:10:19.930236 containerd[1533]: time="2025-05-14T18:10:19.930217385Z" level=info msg="TearDown network for sandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" successfully"
May 14 18:10:19.932274 containerd[1533]: time="2025-05-14T18:10:19.931741610Z" level=info msg="Ensure that sandbox 24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435 in task-service has been cleanup successfully"
May 14 18:10:19.936625 containerd[1533]: time="2025-05-14T18:10:19.936599394Z" level=info msg="RemovePodSandbox \"24df22680db71555d5651fff735295005be3d035ad47f8e26718c3389dcb7435\" returns successfully"
May 14 18:10:19.944320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772723770.mount: Deactivated successfully.
May 14 18:10:19.946343 containerd[1533]: time="2025-05-14T18:10:19.946298043Z" level=info msg="Container 5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:19.954673 containerd[1533]: time="2025-05-14T18:10:19.954634117Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990\""
May 14 18:10:19.955719 containerd[1533]: time="2025-05-14T18:10:19.955691154Z" level=info msg="StartContainer for \"5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990\""
May 14 18:10:19.956820 containerd[1533]: time="2025-05-14T18:10:19.956795270Z" level=info msg="connecting to shim 5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990" address="unix:///run/containerd/s/b3e85ba00d679ff17d2c799d52561f229b33fe35bbf0fe019babf5300dde0871" protocol=ttrpc version=3
May 14 18:10:19.984309 systemd[1]: Started cri-containerd-5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990.scope - libcontainer container 5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990.
May 14 18:10:20.029396 containerd[1533]: time="2025-05-14T18:10:20.029347629Z" level=info msg="StartContainer for \"5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990\" returns successfully"
May 14 18:10:20.045426 systemd[1]: cri-containerd-5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990.scope: Deactivated successfully.
May 14 18:10:20.045998 containerd[1533]: time="2025-05-14T18:10:20.045971637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990\" id:\"5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990\" pid:4940 exited_at:{seconds:1747246220 nanos:45462408}"
May 14 18:10:20.046085 containerd[1533]: time="2025-05-14T18:10:20.046052387Z" level=info msg="received exit event container_id:\"5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990\" id:\"5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990\" pid:4940 exited_at:{seconds:1747246220 nanos:45462408}"
May 14 18:10:20.230391 kubelet[2802]: E0514 18:10:20.230116 2802 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 18:10:20.521832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c31ea8652bad4db509c771feffa8b963e72dd48186ab3b672a24cde41de1990-rootfs.mount: Deactivated successfully.
May 14 18:10:20.926910 kubelet[2802]: E0514 18:10:20.926404 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:20.930158 containerd[1533]: time="2025-05-14T18:10:20.930090719Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 18:10:20.943502 containerd[1533]: time="2025-05-14T18:10:20.943450337Z" level=info msg="Container 52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:20.947405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168681038.mount: Deactivated successfully.
May 14 18:10:20.954771 containerd[1533]: time="2025-05-14T18:10:20.954719831Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827\""
May 14 18:10:20.956129 containerd[1533]: time="2025-05-14T18:10:20.956089957Z" level=info msg="StartContainer for \"52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827\""
May 14 18:10:20.963161 containerd[1533]: time="2025-05-14T18:10:20.959516666Z" level=info msg="connecting to shim 52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827" address="unix:///run/containerd/s/b3e85ba00d679ff17d2c799d52561f229b33fe35bbf0fe019babf5300dde0871" protocol=ttrpc version=3
May 14 18:10:21.017366 systemd[1]: Started cri-containerd-52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827.scope - libcontainer container 52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827.
May 14 18:10:21.141193 containerd[1533]: time="2025-05-14T18:10:21.140176236Z" level=info msg="StartContainer for \"52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827\" returns successfully"
May 14 18:10:21.159892 systemd[1]: cri-containerd-52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827.scope: Deactivated successfully.
May 14 18:10:21.162319 containerd[1533]: time="2025-05-14T18:10:21.162284237Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827\" id:\"52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827\" pid:4984 exited_at:{seconds:1747246221 nanos:161776858}"
May 14 18:10:21.162625 containerd[1533]: time="2025-05-14T18:10:21.162462326Z" level=info msg="received exit event container_id:\"52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827\" id:\"52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827\" pid:4984 exited_at:{seconds:1747246221 nanos:161776858}"
May 14 18:10:21.189294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52cc273e95f04e2e463a3ea1a2b47825bf3444bb0153d9f454901d0637bbd827-rootfs.mount: Deactivated successfully.
May 14 18:10:21.935045 kubelet[2802]: E0514 18:10:21.933269 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:21.938681 containerd[1533]: time="2025-05-14T18:10:21.938488704Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 18:10:21.948169 containerd[1533]: time="2025-05-14T18:10:21.947963074Z" level=info msg="Container 7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:21.953554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2712125774.mount: Deactivated successfully.
May 14 18:10:21.959959 containerd[1533]: time="2025-05-14T18:10:21.959913186Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62\""
May 14 18:10:21.961167 containerd[1533]: time="2025-05-14T18:10:21.961065493Z" level=info msg="StartContainer for \"7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62\""
May 14 18:10:21.963008 containerd[1533]: time="2025-05-14T18:10:21.962974547Z" level=info msg="connecting to shim 7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62" address="unix:///run/containerd/s/b3e85ba00d679ff17d2c799d52561f229b33fe35bbf0fe019babf5300dde0871" protocol=ttrpc version=3
May 14 18:10:21.986297 systemd[1]: Started cri-containerd-7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62.scope - libcontainer container 7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62.
May 14 18:10:22.017637 systemd[1]: cri-containerd-7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62.scope: Deactivated successfully.
May 14 18:10:22.019277 containerd[1533]: time="2025-05-14T18:10:22.019221010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62\" id:\"7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62\" pid:5024 exited_at:{seconds:1747246222 nanos:17767565}"
May 14 18:10:22.019527 containerd[1533]: time="2025-05-14T18:10:22.019392000Z" level=info msg="received exit event container_id:\"7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62\" id:\"7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62\" pid:5024 exited_at:{seconds:1747246222 nanos:17767565}"
May 14 18:10:22.028186 containerd[1533]: time="2025-05-14T18:10:22.028122132Z" level=info msg="StartContainer for \"7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62\" returns successfully"
May 14 18:10:22.047781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fd5c5a0c701aab326e9fd54da0d563f69b637d807c2bc41f7e3f96751d1ff62-rootfs.mount: Deactivated successfully.
May 14 18:10:22.939116 kubelet[2802]: E0514 18:10:22.938552 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:22.942344 containerd[1533]: time="2025-05-14T18:10:22.942290321Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 18:10:22.956338 containerd[1533]: time="2025-05-14T18:10:22.956173058Z" level=info msg="Container 344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1: CDI devices from CRI Config.CDIDevices: []"
May 14 18:10:22.960720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642270484.mount: Deactivated successfully.
May 14 18:10:22.967587 containerd[1533]: time="2025-05-14T18:10:22.967563962Z" level=info msg="CreateContainer within sandbox \"b9491832650d844869bfe6217fe592c27d79b4061d728ab99e699e9d9cee23c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\""
May 14 18:10:22.968061 containerd[1533]: time="2025-05-14T18:10:22.968041960Z" level=info msg="StartContainer for \"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\""
May 14 18:10:22.970071 containerd[1533]: time="2025-05-14T18:10:22.970013964Z" level=info msg="connecting to shim 344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1" address="unix:///run/containerd/s/b3e85ba00d679ff17d2c799d52561f229b33fe35bbf0fe019babf5300dde0871" protocol=ttrpc version=3
May 14 18:10:22.991272 systemd[1]: Started cri-containerd-344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1.scope - libcontainer container 344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1.
May 14 18:10:23.037022 containerd[1533]: time="2025-05-14T18:10:23.036933235Z" level=info msg="StartContainer for \"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\" returns successfully"
May 14 18:10:23.199368 containerd[1533]: time="2025-05-14T18:10:23.198508693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\" id:\"7baaa21d0723a7a943563f3c970bdd53c5984f24fe5decad74e236c7f2530fb9\" pid:5088 exited_at:{seconds:1747246223 nanos:198001584}"
May 14 18:10:23.613656 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 14 18:10:23.946998 kubelet[2802]: E0514 18:10:23.946526 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:23.964820 kubelet[2802]: I0514 18:10:23.964764 2802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4g5h4" podStartSLOduration=5.964748728 podStartE2EDuration="5.964748728s" podCreationTimestamp="2025-05-14 18:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:10:23.964606978 +0000 UTC m=+364.171946960" watchObservedRunningTime="2025-05-14 18:10:23.964748728 +0000 UTC m=+364.172088700"
May 14 18:10:24.948503 kubelet[2802]: E0514 18:10:24.948432 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:25.993297 containerd[1533]: time="2025-05-14T18:10:25.992952457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\" id:\"b5b89e0e262009e75ed227adb9a817da42bc5b23418658ff5f965db815f5d0c2\" pid:5394 exit_status:1 exited_at:{seconds:1747246225 nanos:991346382}"
May 14 18:10:26.455258 systemd-networkd[1457]: lxc_health: Link UP
May 14 18:10:26.460506 systemd-networkd[1457]: lxc_health: Gained carrier
May 14 18:10:26.675906 kubelet[2802]: E0514 18:10:26.675846 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:26.905461 kubelet[2802]: E0514 18:10:26.905392 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:26.955092 kubelet[2802]: E0514 18:10:26.954579 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:27.483277 systemd-networkd[1457]: lxc_health: Gained IPv6LL
May 14 18:10:27.959304 kubelet[2802]: E0514 18:10:27.958807 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:28.268541 containerd[1533]: time="2025-05-14T18:10:28.267970437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\" id:\"995852fbeb6822d97e825dfd03f29ad62b2106ad904b55c595388d6cf6eb42b2\" pid:5603 exited_at:{seconds:1747246228 nanos:266807400}"
May 14 18:10:30.461083 containerd[1533]: time="2025-05-14T18:10:30.460998452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\" id:\"13daa035267a4f06a099af93e1eed6e9e9b142f850fc6a1bf0cbb67dc3d1cef7\" pid:5629 exited_at:{seconds:1747246230 nanos:460214644}"
May 14 18:10:30.907371 kubelet[2802]: E0514 18:10:30.905589 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
May 14 18:10:32.620704 containerd[1533]: time="2025-05-14T18:10:32.620657362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"344b8ea9a50e01edc32567246a153b515f21f5cb7482a955d424bae296b006e1\" id:\"a73b9f56431eb350f5fa3d617590c6f53074447237929df3bf76f5a4f01d76e6\" pid:5655 exited_at:{seconds:1747246232 nanos:619727495}"
May 14 18:10:32.675062 sshd[4919]: Connection closed by 147.75.109.163 port 48144
May 14 18:10:32.676125 sshd-session[4917]: pam_unix(sshd:session): session closed for user core
May 14 18:10:32.682171 systemd[1]: sshd@60-172.236.122.223:22-147.75.109.163:48144.service: Deactivated successfully.
May 14 18:10:32.685324 systemd[1]: session-58.scope: Deactivated successfully.
May 14 18:10:32.686939 systemd-logind[1515]: Session 58 logged out. Waiting for processes to exit.
May 14 18:10:32.689343 systemd-logind[1515]: Removed session 58.
May 14 18:10:32.905837 kubelet[2802]: E0514 18:10:32.905661 2802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"