May 27 03:54:44.834363 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:54:44.834385 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:54:44.834394 kernel: BIOS-provided physical RAM map:
May 27 03:54:44.834402 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 27 03:54:44.834408 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 27 03:54:44.834413 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 03:54:44.834419 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 27 03:54:44.834425 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 27 03:54:44.834430 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 03:54:44.834436 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 03:54:44.834442 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:54:44.834447 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 03:54:44.834455 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 27 03:54:44.834460 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:54:44.834467 kernel: NX (Execute Disable) protection: active
May 27 03:54:44.834473 kernel: APIC: Static calls initialized
May 27 03:54:44.834479 kernel: SMBIOS 2.8 present.
May 27 03:54:44.834487 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 27 03:54:44.834493 kernel: DMI: Memory slots populated: 1/1
May 27 03:54:44.834499 kernel: Hypervisor detected: KVM
May 27 03:54:44.834505 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:54:44.834510 kernel: kvm-clock: using sched offset of 5294251490 cycles
May 27 03:54:44.834517 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:54:44.834534 kernel: tsc: Detected 1999.999 MHz processor
May 27 03:54:44.834549 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:54:44.834556 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:54:44.834562 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 27 03:54:44.834571 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 03:54:44.834577 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:54:44.834583 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 27 03:54:44.834589 kernel: Using GB pages for direct mapping
May 27 03:54:44.834595 kernel: ACPI: Early table checksum verification disabled
May 27 03:54:44.834601 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 27 03:54:44.834608 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:44.834614 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:44.834620 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:44.834628 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 27 03:54:44.834634 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:44.834640 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:44.834646 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:44.834656 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:54:44.834664 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 27 03:54:44.834674 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 27 03:54:44.834681 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 27 03:54:44.834689 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 27 03:54:44.834697 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 27 03:54:44.834704 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 27 03:54:44.834712 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 27 03:54:44.834719 kernel: No NUMA configuration found
May 27 03:54:44.834726 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 27 03:54:44.834736 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
May 27 03:54:44.834744 kernel: Zone ranges:
May 27 03:54:44.834751 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:54:44.834759 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 03:54:44.834766 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 27 03:54:44.834774 kernel: Device empty
May 27 03:54:44.834781 kernel: Movable zone start for each node
May 27 03:54:44.834789 kernel: Early memory node ranges
May 27 03:54:44.834796 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 03:54:44.834804 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 27 03:54:44.834813 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 27 03:54:44.834821 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 27 03:54:44.834829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:54:44.834837 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 03:54:44.834844 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 27 03:54:44.834852 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:54:44.834859 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:54:44.834866 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:54:44.834889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:54:44.834897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:54:44.834904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:54:44.834910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:54:44.834916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:54:44.834922 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:54:44.834928 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:54:44.834934 kernel: TSC deadline timer available
May 27 03:54:44.834939 kernel: CPU topo: Max. logical packages: 1
May 27 03:54:44.834945 kernel: CPU topo: Max. logical dies: 1
May 27 03:54:44.834951 kernel: CPU topo: Max. dies per package: 1
May 27 03:54:44.834957 kernel: CPU topo: Max. threads per core: 1
May 27 03:54:44.834962 kernel: CPU topo: Num. cores per package: 2
May 27 03:54:44.834967 kernel: CPU topo: Num. threads per package: 2
May 27 03:54:44.834973 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 03:54:44.834978 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:54:44.834983 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:54:44.834988 kernel: kvm-guest: setup PV sched yield
May 27 03:54:44.834994 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 03:54:44.835000 kernel: Booting paravirtualized kernel on KVM
May 27 03:54:44.835006 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:54:44.835011 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 03:54:44.835017 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 03:54:44.835022 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 03:54:44.835027 kernel: pcpu-alloc: [0] 0 1
May 27 03:54:44.835033 kernel: kvm-guest: PV spinlocks enabled
May 27 03:54:44.835038 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:54:44.835044 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:54:44.835051 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:54:44.835057 kernel: random: crng init done
May 27 03:54:44.835062 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:54:44.835067 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:54:44.835073 kernel: Fallback order for Node 0: 0
May 27 03:54:44.835078 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 27 03:54:44.835084 kernel: Policy zone: Normal
May 27 03:54:44.835089 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:54:44.835094 kernel: software IO TLB: area num 2.
May 27 03:54:44.835101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 03:54:44.835106 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:54:44.835111 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:54:44.835117 kernel: Dynamic Preempt: voluntary
May 27 03:54:44.835122 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:54:44.835129 kernel: rcu: RCU event tracing is enabled.
May 27 03:54:44.835135 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 03:54:44.835142 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:54:44.835148 kernel: Rude variant of Tasks RCU enabled.
May 27 03:54:44.835156 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:54:44.835163 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:54:44.835169 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 03:54:44.835175 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:54:44.835188 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:54:44.835198 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:54:44.835206 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 03:54:44.835215 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:54:44.835223 kernel: Console: colour VGA+ 80x25
May 27 03:54:44.835231 kernel: printk: legacy console [tty0] enabled
May 27 03:54:44.835239 kernel: printk: legacy console [ttyS0] enabled
May 27 03:54:44.835247 kernel: ACPI: Core revision 20240827
May 27 03:54:44.835257 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:54:44.835265 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:54:44.835273 kernel: x2apic enabled
May 27 03:54:44.835281 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:54:44.835291 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:54:44.835299 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:54:44.835307 kernel: kvm-guest: setup PV IPIs
May 27 03:54:44.835314 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:54:44.835321 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
May 27 03:54:44.835328 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
May 27 03:54:44.835334 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:54:44.835341 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:54:44.835347 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:54:44.835356 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:54:44.835363 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:54:44.835369 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:54:44.835376 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 27 03:54:44.835383 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:54:44.835389 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:54:44.835396 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:54:44.835403 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:54:44.835410 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:54:44.835431 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:54:44.835437 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:54:44.835444 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:54:44.835451 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 27 03:54:44.835457 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:54:44.835464 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 27 03:54:44.835471 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 27 03:54:44.835477 kernel: Freeing SMP alternatives memory: 32K
May 27 03:54:44.835486 kernel: pid_max: default: 32768 minimum: 301
May 27 03:54:44.835492 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:54:44.835499 kernel: landlock: Up and running.
May 27 03:54:44.835506 kernel: SELinux: Initializing.
May 27 03:54:44.835512 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:54:44.835519 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:54:44.835526 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 27 03:54:44.835533 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:54:44.835539 kernel: ... version: 0
May 27 03:54:44.835548 kernel: ... bit width: 48
May 27 03:54:44.835554 kernel: ... generic registers: 6
May 27 03:54:44.835561 kernel: ... value mask: 0000ffffffffffff
May 27 03:54:44.835567 kernel: ... max period: 00007fffffffffff
May 27 03:54:44.835574 kernel: ... fixed-purpose events: 0
May 27 03:54:44.835581 kernel: ... event mask: 000000000000003f
May 27 03:54:44.835587 kernel: signal: max sigframe size: 3376
May 27 03:54:44.835594 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:54:44.835601 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:54:44.835608 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:54:44.835625 kernel: smp: Bringing up secondary CPUs ...
May 27 03:54:44.835640 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:54:44.835662 kernel: .... node #0, CPUs: #1
May 27 03:54:44.835669 kernel: smp: Brought up 1 node, 2 CPUs
May 27 03:54:44.835675 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
May 27 03:54:44.835682 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 227296K reserved, 0K cma-reserved)
May 27 03:54:44.835689 kernel: devtmpfs: initialized
May 27 03:54:44.835696 kernel: x86/mm: Memory block size: 128MB
May 27 03:54:44.835702 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:54:44.835711 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 03:54:44.835718 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:54:44.835728 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:54:44.835735 kernel: audit: initializing netlink subsys (disabled)
May 27 03:54:44.835741 kernel: audit: type=2000 audit(1748318081.945:1): state=initialized audit_enabled=0 res=1
May 27 03:54:44.835748 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:54:44.835754 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:54:44.835761 kernel: cpuidle: using governor menu
May 27 03:54:44.835769 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:54:44.835776 kernel: dca service started, version 1.12.1
May 27 03:54:44.835783 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 03:54:44.835790 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 03:54:44.835796 kernel: PCI: Using configuration type 1 for base access
May 27 03:54:44.835803 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:54:44.835810 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:54:44.835816 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:54:44.835823 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:54:44.835831 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:54:44.835838 kernel: ACPI: Added _OSI(Module Device)
May 27 03:54:44.835845 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:54:44.835851 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:54:44.835858 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:54:44.835865 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:54:44.835891 kernel: ACPI: Interpreter enabled
May 27 03:54:44.835898 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:54:44.835905 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:54:44.839828 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:54:44.839845 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:54:44.839852 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:54:44.839859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:54:44.840031 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:54:44.840129 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:54:44.840219 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:54:44.840227 kernel: PCI host bridge to bus 0000:00
May 27 03:54:44.840333 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:54:44.840416 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:54:44.840496 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:54:44.840576 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 27 03:54:44.840676 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 03:54:44.840758 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 27 03:54:44.843954 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:54:44.844096 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:54:44.844225 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:54:44.844336 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 03:54:44.844441 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 03:54:44.844547 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 03:54:44.844650 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:54:44.844772 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 27 03:54:44.845549 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 27 03:54:44.845671 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 03:54:44.845780 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 03:54:44.845938 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:54:44.846050 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 27 03:54:44.846156 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 03:54:44.846267 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 03:54:44.846372 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 03:54:44.846487 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:54:44.846591 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:54:44.846707 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:54:44.846813 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 27 03:54:44.846947 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 27 03:54:44.847069 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:54:44.847174 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 03:54:44.847183 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:54:44.847190 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:54:44.847197 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:54:44.847204 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:54:44.847210 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:54:44.847220 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:54:44.847227 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:54:44.847234 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:54:44.847241 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:54:44.847247 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:54:44.847254 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:54:44.847261 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:54:44.847268 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:54:44.847274 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:54:44.847283 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:54:44.847289 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:54:44.847296 kernel: iommu: Default domain type: Translated
May 27 03:54:44.847303 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:54:44.847309 kernel: PCI: Using ACPI for IRQ routing
May 27 03:54:44.847316 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:54:44.847325 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 27 03:54:44.847332 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 27 03:54:44.847437 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:54:44.847545 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:54:44.847649 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:54:44.847658 kernel: vgaarb: loaded
May 27 03:54:44.847665 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:54:44.847672 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:54:44.847679 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:54:44.847686 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:54:44.847692 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:54:44.847702 kernel: pnp: PnP ACPI init
May 27 03:54:44.847820 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 03:54:44.847830 kernel: pnp: PnP ACPI: found 5 devices
May 27 03:54:44.847837 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:54:44.847844 kernel: NET: Registered PF_INET protocol family
May 27 03:54:44.847851 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:54:44.847858 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:54:44.847865 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:54:44.847907 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:54:44.847915 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:54:44.847921 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:54:44.847928 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:54:44.847935 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:54:44.847942 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:54:44.847949 kernel: NET: Registered PF_XDP protocol family
May 27 03:54:44.848055 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:54:44.848153 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:54:44.848252 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:54:44.848347 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 27 03:54:44.848442 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 03:54:44.848539 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 27 03:54:44.848548 kernel: PCI: CLS 0 bytes, default 64
May 27 03:54:44.848555 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 03:54:44.848562 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 27 03:54:44.848569 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
May 27 03:54:44.848578 kernel: Initialise system trusted keyrings
May 27 03:54:44.848585 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:54:44.848591 kernel: Key type asymmetric registered
May 27 03:54:44.848598 kernel: Asymmetric key parser 'x509' registered
May 27 03:54:44.848605 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:54:44.848612 kernel: io scheduler mq-deadline registered
May 27 03:54:44.848619 kernel: io scheduler kyber registered
May 27 03:54:44.848626 kernel: io scheduler bfq registered
May 27 03:54:44.848633 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:54:44.848640 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:54:44.848649 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:54:44.848656 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:54:44.848663 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:54:44.848669 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:54:44.848676 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:54:44.848683 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:54:44.848798 kernel: rtc_cmos 00:03: RTC can wake from S4
May 27 03:54:44.848809 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:54:44.848935 kernel: rtc_cmos 00:03: registered as rtc0
May 27 03:54:44.849039 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T03:54:44 UTC (1748318084)
May 27 03:54:44.849143 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 03:54:44.849153 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:54:44.849160 kernel: NET: Registered PF_INET6 protocol family
May 27 03:54:44.849166 kernel: Segment Routing with IPv6
May 27 03:54:44.849173 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:54:44.849180 kernel: NET: Registered PF_PACKET protocol family
May 27 03:54:44.849189 kernel: Key type dns_resolver registered
May 27 03:54:44.849196 kernel: IPI shorthand broadcast: enabled
May 27 03:54:44.849203 kernel: sched_clock: Marking stable (2715002439, 217270374)->(2962289200, -30016387)
May 27 03:54:44.849209 kernel: registered taskstats version 1
May 27 03:54:44.849216 kernel: Loading compiled-in X.509 certificates
May 27 03:54:44.849223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:54:44.849230 kernel: Demotion targets for Node 0: null
May 27 03:54:44.849236 kernel: Key type .fscrypt registered
May 27 03:54:44.849243 kernel: Key type fscrypt-provisioning registered
May 27 03:54:44.849250 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:54:44.849258 kernel: ima: Allocated hash algorithm: sha1
May 27 03:54:44.849265 kernel: ima: No architecture policies found
May 27 03:54:44.849272 kernel: clk: Disabling unused clocks
May 27 03:54:44.849278 kernel: Warning: unable to open an initial console.
May 27 03:54:44.849285 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:54:44.849292 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:54:44.849299 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:54:44.849305 kernel: Run /init as init process
May 27 03:54:44.849314 kernel: with arguments:
May 27 03:54:44.849320 kernel: /init
May 27 03:54:44.849327 kernel: with environment:
May 27 03:54:44.849334 kernel: HOME=/
May 27 03:54:44.849340 kernel: TERM=linux
May 27 03:54:44.849359 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:54:44.849369 systemd[1]: Successfully made /usr/ read-only.
May 27 03:54:44.849379 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:54:44.849389 systemd[1]: Detected virtualization kvm.
May 27 03:54:44.849396 systemd[1]: Detected architecture x86-64.
May 27 03:54:44.849403 systemd[1]: Running in initrd.
May 27 03:54:44.849410 systemd[1]: No hostname configured, using default hostname.
May 27 03:54:44.849418 systemd[1]: Hostname set to .
May 27 03:54:44.849425 systemd[1]: Initializing machine ID from random generator.
May 27 03:54:44.849432 systemd[1]: Queued start job for default target initrd.target.
May 27 03:54:44.849440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:54:44.849449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:54:44.849457 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:54:44.849464 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:54:44.849471 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:54:44.849479 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:54:44.849488 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:54:44.849495 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:54:44.849505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:54:44.849512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:54:44.849519 systemd[1]: Reached target paths.target - Path Units.
May 27 03:54:44.849526 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:54:44.849534 systemd[1]: Reached target swap.target - Swaps.
May 27 03:54:44.849541 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:54:44.849548 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:54:44.849555 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:54:44.849564 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:54:44.849572 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:54:44.849579 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:54:44.849586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:54:44.849595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:54:44.849603 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:54:44.849613 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:54:44.849621 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:54:44.849628 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:54:44.849636 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:54:44.849643 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:54:44.849651 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:54:44.849658 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:54:44.849665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:54:44.849675 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:54:44.849701 systemd-journald[206]: Collecting audit messages is disabled.
May 27 03:54:44.849721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:54:44.849729 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:54:44.849737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:54:44.849745 systemd-journald[206]: Journal started
May 27 03:54:44.849764 systemd-journald[206]: Runtime Journal (/run/log/journal/ae6ff2967777492c9fdb951f340a21da) is 8M, max 78.5M, 70.5M free.
May 27 03:54:44.834571 systemd-modules-load[207]: Inserted module 'overlay'
May 27 03:54:44.856916 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:54:44.863994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:54:44.864975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:54:44.933080 kernel: Bridge firewalling registered
May 27 03:54:44.868888 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 27 03:54:44.933707 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:54:44.940176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:44.941943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:54:44.945159 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:54:44.947423 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:54:44.948117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:54:44.952972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:54:44.954720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:54:44.964124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:54:44.967989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:54:44.969129 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:54:44.973409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:54:44.977585 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:54:44.992784 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:54:45.018804 systemd-resolved[233]: Positive Trust Anchors:
May 27 03:54:45.019717 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:54:45.019743 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:54:45.024723 systemd-resolved[233]: Defaulting to hostname 'linux'.
May 27 03:54:45.025621 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:54:45.026658 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:54:45.084917 kernel: SCSI subsystem initialized
May 27 03:54:45.092898 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:54:45.103903 kernel: iscsi: registered transport (tcp)
May 27 03:54:45.122079 kernel: iscsi: registered transport (qla4xxx)
May 27 03:54:45.122096 kernel: QLogic iSCSI HBA Driver
May 27 03:54:45.140662 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:54:45.153080 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:54:45.155595 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:54:45.202946 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:54:45.205165 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:54:45.252889 kernel: raid6: avx2x4 gen() 33632 MB/s
May 27 03:54:45.270893 kernel: raid6: avx2x2 gen() 30719 MB/s
May 27 03:54:45.289290 kernel: raid6: avx2x1 gen() 19992 MB/s
May 27 03:54:45.289303 kernel: raid6: using algorithm avx2x4 gen() 33632 MB/s
May 27 03:54:45.308635 kernel: raid6: .... xor() 4223 MB/s, rmw enabled
May 27 03:54:45.308664 kernel: raid6: using avx2x2 recovery algorithm
May 27 03:54:45.328904 kernel: xor: automatically using best checksumming function avx
May 27 03:54:45.444905 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:54:45.453389 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:54:45.455652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:54:45.473041 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 27 03:54:45.477738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:54:45.480729 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:54:45.507423 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
May 27 03:54:45.535523 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:54:45.537560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:54:45.594687 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:54:45.597904 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:54:45.652892 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 27 03:54:45.665235 kernel: scsi host0: Virtio SCSI HBA
May 27 03:54:45.679892 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:54:45.681897 kernel: libata version 3.00 loaded.
May 27 03:54:45.689291 kernel: AES CTR mode by8 optimization enabled
May 27 03:54:45.699905 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 03:54:45.705966 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 27 03:54:45.706011 kernel: ahci 0000:00:1f.2: version 3.0
May 27 03:54:45.706173 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 03:54:45.712772 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 03:54:45.713037 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 03:54:45.713156 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 03:54:45.716952 kernel: scsi host1: ahci
May 27 03:54:45.717131 kernel: scsi host2: ahci
May 27 03:54:45.721896 kernel: scsi host3: ahci
May 27 03:54:45.724046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:54:45.727077 kernel: scsi host4: ahci
May 27 03:54:45.727231 kernel: scsi host5: ahci
May 27 03:54:45.727372 kernel: scsi host6: ahci
May 27 03:54:45.724218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:45.741786 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 0
May 27 03:54:45.741806 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 0
May 27 03:54:45.741815 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 0
May 27 03:54:45.741822 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 0
May 27 03:54:45.741830 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 0
May 27 03:54:45.741842 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 0
May 27 03:54:45.742378 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:54:45.744491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:54:45.837615 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:54:45.854357 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 27 03:54:45.857128 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 27 03:54:45.857286 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 27 03:54:45.859020 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 27 03:54:45.859362 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 27 03:54:45.870052 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 03:54:45.870075 kernel: GPT:9289727 != 167739391
May 27 03:54:45.870086 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 03:54:45.871601 kernel: GPT:9289727 != 167739391
May 27 03:54:45.873717 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 03:54:45.873736 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:54:45.873746 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 27 03:54:45.925709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:46.048057 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 03:54:46.048096 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 27 03:54:46.048108 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 03:54:46.048118 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 03:54:46.049615 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 03:54:46.049674 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 03:54:46.107720 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 27 03:54:46.123927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 27 03:54:46.131592 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:54:46.140203 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 27 03:54:46.147849 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 27 03:54:46.148488 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 27 03:54:46.150926 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:54:46.151534 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:54:46.152704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:54:46.154432 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:54:46.157026 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:54:46.171555 disk-uuid[631]: Primary Header is updated.
May 27 03:54:46.171555 disk-uuid[631]: Secondary Entries is updated.
May 27 03:54:46.171555 disk-uuid[631]: Secondary Header is updated.
May 27 03:54:46.177497 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:54:46.180310 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:54:46.194891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:54:47.194470 disk-uuid[634]: The operation has completed successfully.
May 27 03:54:47.196961 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:54:47.242390 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:54:47.242497 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:54:47.262058 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:54:47.281732 sh[653]: Success
May 27 03:54:47.298528 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:54:47.298559 kernel: device-mapper: uevent: version 1.0.3
May 27 03:54:47.299149 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:54:47.309893 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 03:54:47.352203 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:54:47.354091 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:54:47.367671 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:54:47.378394 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:54:47.378415 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (665)
May 27 03:54:47.381896 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:54:47.384927 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:54:47.384941 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:54:47.394069 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:54:47.394963 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:54:47.395894 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:54:47.396500 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:54:47.406694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 03:54:47.427571 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (696)
May 27 03:54:47.427594 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:47.430388 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:54:47.430406 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:54:47.442910 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:47.444633 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:54:47.445893 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:54:47.532312 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:54:47.536496 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:54:47.542333 ignition[759]: Ignition 2.21.0
May 27 03:54:47.542348 ignition[759]: Stage: fetch-offline
May 27 03:54:47.542378 ignition[759]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:47.542386 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:47.542457 ignition[759]: parsed url from cmdline: ""
May 27 03:54:47.546058 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:54:47.542460 ignition[759]: no config URL provided
May 27 03:54:47.542464 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:54:47.542471 ignition[759]: no config at "/usr/lib/ignition/user.ign"
May 27 03:54:47.542474 ignition[759]: failed to fetch config: resource requires networking
May 27 03:54:47.542597 ignition[759]: Ignition finished successfully
May 27 03:54:47.569711 systemd-networkd[839]: lo: Link UP
May 27 03:54:47.569722 systemd-networkd[839]: lo: Gained carrier
May 27 03:54:47.570936 systemd-networkd[839]: Enumeration completed
May 27 03:54:47.571642 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:54:47.571907 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:54:47.571919 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:54:47.573940 systemd-networkd[839]: eth0: Link UP
May 27 03:54:47.573943 systemd-networkd[839]: eth0: Gained carrier
May 27 03:54:47.573950 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:54:47.574994 systemd[1]: Reached target network.target - Network.
May 27 03:54:47.577155 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 03:54:47.599648 ignition[844]: Ignition 2.21.0
May 27 03:54:47.599660 ignition[844]: Stage: fetch
May 27 03:54:47.599769 ignition[844]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:47.599778 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:47.599834 ignition[844]: parsed url from cmdline: ""
May 27 03:54:47.599838 ignition[844]: no config URL provided
May 27 03:54:47.599841 ignition[844]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:54:47.599848 ignition[844]: no config at "/usr/lib/ignition/user.ign"
May 27 03:54:47.599894 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #1
May 27 03:54:47.600313 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 03:54:47.800536 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #2
May 27 03:54:47.800743 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 03:54:48.058949 systemd-networkd[839]: eth0: DHCPv4 address 172.234.212.30/24, gateway 172.234.212.1 acquired from 23.40.196.242
May 27 03:54:48.201340 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #3
May 27 03:54:48.293818 ignition[844]: PUT result: OK
May 27 03:54:48.293928 ignition[844]: GET http://169.254.169.254/v1/user-data: attempt #1
May 27 03:54:48.403722 ignition[844]: GET result: OK
May 27 03:54:48.404476 ignition[844]: parsing config with SHA512: 45262b3270fa2153ea7570421398d6f3f9c69b66db4abb286e950c26f9ebd67fdeab6b3254f88f4567ab3affd232c9d363ae74a73f4d65aae25df5e65929f8b1
May 27 03:54:48.408207 unknown[844]: fetched base config from "system"
May 27 03:54:48.408812 unknown[844]: fetched base config from "system"
May 27 03:54:48.408822 unknown[844]: fetched user config from "akamai"
May 27 03:54:48.409089 ignition[844]: fetch: fetch complete
May 27 03:54:48.409093 ignition[844]: fetch: fetch passed
May 27 03:54:48.409136 ignition[844]: Ignition finished successfully
May 27 03:54:48.412331 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 03:54:48.414202 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:54:48.458921 ignition[852]: Ignition 2.21.0
May 27 03:54:48.458937 ignition[852]: Stage: kargs
May 27 03:54:48.459079 ignition[852]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:48.459092 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:48.462465 ignition[852]: kargs: kargs passed
May 27 03:54:48.462532 ignition[852]: Ignition finished successfully
May 27 03:54:48.464556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:54:48.467005 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:54:48.484343 ignition[859]: Ignition 2.21.0
May 27 03:54:48.484353 ignition[859]: Stage: disks
May 27 03:54:48.484473 ignition[859]: no configs at "/usr/lib/ignition/base.d"
May 27 03:54:48.484485 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:48.486949 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:54:48.485046 ignition[859]: disks: disks passed
May 27 03:54:48.488559 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:54:48.485081 ignition[859]: Ignition finished successfully
May 27 03:54:48.489550 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 03:54:48.490572 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:54:48.491955 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:54:48.492816 systemd[1]: Reached target basic.target - Basic System.
May 27 03:54:48.494491 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:54:48.519145 systemd-fsck[867]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 03:54:48.521306 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:54:48.523202 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:54:48.623896 kernel: EXT4-fs (sda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:54:48.625018 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:54:48.626006 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:54:48.627952 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:54:48.630944 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:54:48.631843 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 03:54:48.631904 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:54:48.631942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:54:48.640320 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:54:48.642755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:54:48.651905 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (875)
May 27 03:54:48.651933 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:48.654928 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:54:48.654947 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:54:48.664332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:54:48.699747 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:54:48.705943 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory
May 27 03:54:48.711376 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:54:48.716838 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:54:48.816680 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:54:48.819224 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:54:48.820983 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:54:48.835530 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:54:48.838938 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:48.853459 systemd-networkd[839]: eth0: Gained IPv6LL
May 27 03:54:48.858677 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:54:48.867481 ignition[989]: INFO : Ignition 2.21.0
May 27 03:54:48.868252 ignition[989]: INFO : Stage: mount
May 27 03:54:48.869036 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:54:48.869718 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:48.870430 ignition[989]: INFO : mount: mount passed
May 27 03:54:48.870430 ignition[989]: INFO : Ignition finished successfully
May 27 03:54:48.872123 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 03:54:48.874182 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 03:54:49.626693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:54:49.654100 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1002)
May 27 03:54:49.654190 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:54:49.657916 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:54:49.657940 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:54:49.665585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:54:49.695844 ignition[1018]: INFO : Ignition 2.21.0
May 27 03:54:49.695844 ignition[1018]: INFO : Stage: files
May 27 03:54:49.697027 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:54:49.697027 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:49.697027 ignition[1018]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:54:49.699247 ignition[1018]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:54:49.699247 ignition[1018]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:54:49.700828 ignition[1018]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:54:49.700828 ignition[1018]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:54:49.702361 ignition[1018]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:54:49.701018 unknown[1018]: wrote ssh authorized keys file for user: core
May 27 03:54:49.703856 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:54:49.703856 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 27 03:54:49.999548 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:54:50.526891 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:54:50.528037 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 03:54:50.528037 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 27 03:54:50.772414 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 03:54:50.828383 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 03:54:50.829359 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:54:50.829359 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:54:50.829359 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:54:50.829359 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:54:50.829359 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:54:50.829359 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:54:50.829359 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:54:50.835556 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:54:50.835556 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:54:50.835556 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:54:50.835556 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:54:50.835556 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:54:50.835556 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:54:50.835556 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 27 03:54:51.224225 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 03:54:51.463039 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:54:51.463039 ignition[1018]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 03:54:51.465136 ignition[1018]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:54:51.466216 ignition[1018]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:54:51.466216 ignition[1018]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:54:51.466216 ignition[1018]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:54:51.466216 ignition[1018]: INFO : files: files passed
May 27 03:54:51.466216 ignition[1018]: INFO : Ignition finished successfully
May 27 03:54:51.468711 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:54:51.473219 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:54:51.478865 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:54:51.488507 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:54:51.490037 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:54:51.503154 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:54:51.503154 initrd-setup-root-after-ignition[1049]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:54:51.505329 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:54:51.506533 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:54:51.507585 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:54:51.509432 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:54:51.549398 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:54:51.549558 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:54:51.551381 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:54:51.552668 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:54:51.553583 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:54:51.554365 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:54:51.597815 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:54:51.600020 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:54:51.623707 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:54:51.624279 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:54:51.625499 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:54:51.627009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:54:51.627147 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:54:51.628694 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:54:51.629696 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:54:51.630958 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:54:51.632036 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:54:51.633029 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:54:51.634541 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:54:51.635981 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:54:51.637635 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:54:51.639061 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:54:51.640229 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:54:51.641403 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:54:51.642530 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:54:51.642645 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:54:51.644285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:54:51.645176 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:54:51.646763 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:54:51.647110 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:54:51.648426 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:54:51.648543 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:54:51.650114 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:54:51.650212 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:54:51.650918 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:54:51.651007 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:54:51.653954 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:54:51.656455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:54:51.659955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:54:51.660089 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:54:51.662581 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:54:51.662695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:54:51.672962 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:54:51.673258 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:54:51.683184 ignition[1073]: INFO : Ignition 2.21.0
May 27 03:54:51.685238 ignition[1073]: INFO : Stage: umount
May 27 03:54:51.685238 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:54:51.685238 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:54:51.711101 ignition[1073]: INFO : umount: umount passed
May 27 03:54:51.711101 ignition[1073]: INFO : Ignition finished successfully
May 27 03:54:51.692277 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:54:51.704963 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:54:51.705183 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:54:51.710546 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:54:51.710630 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:54:51.712532 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:54:51.712604 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:54:51.713924 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:54:51.713968 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:54:51.714861 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 03:54:51.714918 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 03:54:51.715930 systemd[1]: Stopped target network.target - Network.
May 27 03:54:51.716846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:54:51.716925 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:54:51.717948 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:54:51.719266 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:54:51.723076 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:54:51.723810 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:54:51.724955 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:54:51.726328 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:54:51.726368 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:54:51.727332 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:54:51.727368 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:54:51.728472 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:54:51.728514 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:54:51.729722 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:54:51.729761 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:54:51.730726 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:54:51.730767 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:54:51.731854 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:54:51.732869 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:54:51.737911 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:54:51.738032 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:54:51.741792 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:54:51.742105 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:54:51.742209 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:54:51.743820 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:54:51.744509 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:54:51.745350 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:54:51.745385 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:54:51.747183 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:54:51.748385 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:54:51.748428 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:54:51.751019 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:54:51.751060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:54:51.752956 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:54:51.752997 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:54:51.753675 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:54:51.753719 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:54:51.755188 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:54:51.758058 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:54:51.758300 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:54:51.776422 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:54:51.776619 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:54:51.780341 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:54:51.780508 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:54:51.782137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:54:51.782194 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:54:51.783443 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:54:51.783476 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:54:51.784757 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:54:51.784800 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:54:51.786816 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:54:51.786893 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:54:51.788177 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:54:51.788222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:54:51.789845 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:54:51.792155 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:54:51.792201 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:54:51.796729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:54:51.796783 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:54:51.798950 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 03:54:51.798989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:54:51.800555 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:54:51.800594 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:54:51.801570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:54:51.801611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:51.807726 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 03:54:51.807773 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 03:54:51.807810 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 03:54:51.807847 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:54:51.810474 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:54:51.810754 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:54:51.812380 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:54:51.814492 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:54:51.842321 systemd[1]: Switching root.
May 27 03:54:51.871599 systemd-journald[206]: Journal stopped
May 27 03:54:52.961679 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
May 27 03:54:52.961705 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:54:52.961717 kernel: SELinux: policy capability open_perms=1
May 27 03:54:52.961729 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:54:52.961737 kernel: SELinux: policy capability always_check_network=0
May 27 03:54:52.961746 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:54:52.961755 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:54:52.961764 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:54:52.961772 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:54:52.961781 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:54:52.961791 kernel: audit: type=1403 audit(1748318092.039:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:54:52.961801 systemd[1]: Successfully loaded SELinux policy in 60.567ms.
May 27 03:54:52.961811 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.774ms.
May 27 03:54:52.961821 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:54:52.961832 systemd[1]: Detected virtualization kvm.
May 27 03:54:52.961843 systemd[1]: Detected architecture x86-64.
May 27 03:54:52.961852 systemd[1]: Detected first boot.
May 27 03:54:52.961862 systemd[1]: Initializing machine ID from random generator.
May 27 03:54:52.961944 zram_generator::config[1116]: No configuration found.
May 27 03:54:52.961958 kernel: Guest personality initialized and is inactive
May 27 03:54:52.961967 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:54:52.961976 kernel: Initialized host personality
May 27 03:54:52.961987 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:54:52.961996 systemd[1]: Populated /etc with preset unit settings.
May 27 03:54:52.962006 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:54:52.962015 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:54:52.962024 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:54:52.962033 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:54:52.962043 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:54:52.962054 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:54:52.962064 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:54:52.962073 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:54:52.962084 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:54:52.962094 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:54:52.962103 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:54:52.962112 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:54:52.962123 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:54:52.962133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:54:52.962142 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:54:52.962152 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:54:52.962165 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:54:52.962175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:54:52.962184 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:54:52.962194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:54:52.962205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:54:52.962406 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:54:52.962415 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:54:52.962424 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:54:52.962434 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:54:52.962443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:54:52.962453 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:54:52.962462 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:54:52.962473 systemd[1]: Reached target swap.target - Swaps.
May 27 03:54:52.962484 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:54:52.962493 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:54:52.962503 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:54:52.962513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:54:52.962524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:54:52.962534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:54:52.962544 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:54:52.962554 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:54:52.962563 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:54:52.962573 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:54:52.962582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:52.962592 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:54:52.962603 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:54:52.962612 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:54:52.962622 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:54:52.962632 systemd[1]: Reached target machines.target - Containers.
May 27 03:54:52.962641 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:54:52.962651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:54:52.962660 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:54:52.962670 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:54:52.962681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:54:52.962691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:54:52.962701 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:54:52.962711 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:54:52.962720 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:54:52.962730 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:54:52.962740 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:54:52.962750 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:54:52.962759 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:54:52.962770 kernel: ACPI: bus type drm_connector registered
May 27 03:54:52.962780 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:54:52.962790 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:54:52.962799 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:54:52.962809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:54:52.962819 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:54:52.962828 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:54:52.962837 kernel: fuse: init (API version 7.41)
May 27 03:54:52.962848 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:54:52.962858 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:54:52.962867 kernel: loop: module loaded
May 27 03:54:52.962890 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:54:52.962900 systemd[1]: Stopped verity-setup.service.
May 27 03:54:52.962910 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:52.962940 systemd-journald[1200]: Collecting audit messages is disabled.
May 27 03:54:52.962962 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:54:52.962973 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:54:52.962983 systemd-journald[1200]: Journal started
May 27 03:54:52.963002 systemd-journald[1200]: Runtime Journal (/run/log/journal/3ae7cf0f4fa84e5780e067a1e01e70bc) is 8M, max 78.5M, 70.5M free.
May 27 03:54:52.623380 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:54:52.637102 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 27 03:54:52.638030 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:54:52.969909 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:54:52.970870 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:54:52.972027 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:54:52.972641 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:54:52.974511 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:54:52.975345 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:54:52.976192 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:54:52.976399 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:54:52.978227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:54:52.978447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:54:52.979812 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:54:52.980558 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:54:52.981456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:54:52.981655 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:54:52.983568 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:54:52.983833 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:54:52.985372 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:54:52.986179 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:54:52.986375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:54:52.988467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:54:52.990326 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:54:53.016981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:54:53.018726 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:54:53.021114 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:54:53.024967 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:54:53.025554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:54:53.025581 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:54:53.028801 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:54:53.033736 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:54:53.035048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:54:53.036472 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:54:53.041166 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:54:53.042945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:54:53.047257 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:54:53.047972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:54:53.050452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:54:53.061125 systemd-journald[1200]: Time spent on flushing to /var/log/journal/3ae7cf0f4fa84e5780e067a1e01e70bc is 75.861ms for 999 entries.
May 27 03:54:53.061125 systemd-journald[1200]: System Journal (/var/log/journal/3ae7cf0f4fa84e5780e067a1e01e70bc) is 8M, max 195.6M, 187.6M free.
May 27 03:54:53.158921 systemd-journald[1200]: Received client request to flush runtime journal.
May 27 03:54:53.158973 kernel: loop0: detected capacity change from 0 to 113872
May 27 03:54:53.054156 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:54:53.057187 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:54:53.059607 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:54:53.060417 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:54:53.062824 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:54:53.147358 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:54:53.148291 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:54:53.150234 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:54:53.163340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:54:53.178694 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:54:53.177274 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:54:53.196649 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
May 27 03:54:53.196671 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
May 27 03:54:53.200033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:54:53.209034 kernel: loop1: detected capacity change from 0 to 146240
May 27 03:54:53.211993 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:54:53.218597 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:54:53.223749 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:54:53.257923 kernel: loop2: detected capacity change from 0 to 224512
May 27 03:54:53.276803 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:54:53.281140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:54:53.316689 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 27 03:54:53.317166 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 27 03:54:53.322241 kernel: loop3: detected capacity change from 0 to 8
May 27 03:54:53.322726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:54:53.339906 kernel: loop4: detected capacity change from 0 to 113872
May 27 03:54:53.359913 kernel: loop5: detected capacity change from 0 to 146240
May 27 03:54:53.384002 kernel: loop6: detected capacity change from 0 to 224512
May 27 03:54:53.412184 kernel: loop7: detected capacity change from 0 to 8
May 27 03:54:53.413533 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
May 27 03:54:53.414499 (sd-merge)[1267]: Merged extensions into '/usr'.
May 27 03:54:53.419024 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:54:53.419127 systemd[1]: Reloading...
May 27 03:54:53.539911 zram_generator::config[1296]: No configuration found.
May 27 03:54:53.636974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:54:53.663902 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:54:53.708648 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:54:53.709030 systemd[1]: Reloading finished in 289 ms.
May 27 03:54:53.735161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:54:53.736459 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:54:53.755997 systemd[1]: Starting ensure-sysext.service...
May 27 03:54:53.757795 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:54:53.795010 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
May 27 03:54:53.795030 systemd[1]: Reloading...
May 27 03:54:53.798968 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:54:53.799017 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:54:53.799473 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:54:53.799740 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:54:53.801532 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:54:53.801896 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 03:54:53.802020 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 03:54:53.811650 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:54:53.811664 systemd-tmpfiles[1337]: Skipping /boot
May 27 03:54:53.837603 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:54:53.837981 systemd-tmpfiles[1337]: Skipping /boot
May 27 03:54:53.906987 zram_generator::config[1367]: No configuration found.
May 27 03:54:53.999044 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:54:54.070829 systemd[1]: Reloading finished in 275 ms.
May 27 03:54:54.091163 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:54:54.104408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:54:54.114921 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:54:54.118087 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:54:54.125836 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:54:54.130045 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:54:54.133194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:54:54.137085 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:54:54.141134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:54.141291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:54:54.142948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:54:54.151668 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:54:54.160958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:54:54.161686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:54:54.161779 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:54:54.161861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:54.166268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:54.166427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:54:54.166567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:54:54.166862 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:54:54.173976 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:54:54.174515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:54.180703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:54.181308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:54:54.191807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:54:54.195029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:54:54.195126 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:54:54.195249 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:54:54.198573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:54:54.203774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:54:54.216932 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:54:54.219335 systemd[1]: Finished ensure-sysext.service.
May 27 03:54:54.220093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:54:54.228749 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 03:54:54.230042 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:54:54.233437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:54:54.235229 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:54:54.237406 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:54:54.238638 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:54:54.244423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:54:54.248808 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:54:54.250806 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:54:54.251485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:54:54.258642 systemd-udevd[1413]: Using default interface naming scheme 'v255'.
May 27 03:54:54.276390 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:54:54.278679 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:54:54.288735 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:54:54.288935 augenrules[1449]: No rules
May 27 03:54:54.290712 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:54:54.291059 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:54:54.297954 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:54:54.311950 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:54:54.319030 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:54:54.501401 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:54:54.522373 systemd-resolved[1412]: Positive Trust Anchors:
May 27 03:54:54.522735 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:54:54.522849 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:54:54.531767 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 03:54:54.533105 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:54:54.541936 systemd-resolved[1412]: Defaulting to hostname 'linux'.
May 27 03:54:54.546944 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:54:54.548305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:54:54.549978 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:54:54.550584 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:54:54.552112 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:54:54.552531 systemd-networkd[1465]: lo: Link UP
May 27 03:54:54.552536 systemd-networkd[1465]: lo: Gained carrier
May 27 03:54:54.553272 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:54:54.553440 systemd-networkd[1465]: Enumeration completed
May 27 03:54:54.554072 systemd-timesyncd[1436]: No network connectivity, watching for changes.
May 27 03:54:54.554287 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:54:54.555503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:54:54.556745 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:54:54.557716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:54:54.557820 systemd[1]: Reached target paths.target - Path Units.
May 27 03:54:54.558792 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:54:54.562133 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:54:54.564549 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:54:54.573011 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:54:54.575152 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:54:54.576666 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:54:54.587006 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:54:54.588477 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:54:54.591057 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:54:54.592831 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:54:54.600551 systemd[1]: Reached target network.target - Network.
May 27 03:54:54.601784 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:54:54.602666 systemd[1]: Reached target basic.target - Basic System.
May 27 03:54:54.603823 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:54:54.603856 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:54:54.606111 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:54:54.611101 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 03:54:54.613080 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:54:54.621820 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:54:54.623682 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:54:54.626208 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:54:54.627234 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:54:54.628850 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:54:54.637800 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:54:54.640656 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:54:54.648088 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:54:54.651137 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:54:54.666212 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:54:54.674321 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:54:54.685142 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:54:54.688552 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:54:54.689088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:54:54.692165 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:54:54.695074 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:54:54.706160 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Refreshing passwd entry cache
May 27 03:54:54.707911 oslogin_cache_refresh[1509]: Refreshing passwd entry cache
May 27 03:54:54.712444 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Failure getting users, quitting
May 27 03:54:54.712505 oslogin_cache_refresh[1509]: Failure getting users, quitting
May 27 03:54:54.712561 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:54:54.712588 oslogin_cache_refresh[1509]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:54:54.712988 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Refreshing group entry cache
May 27 03:54:54.714889 oslogin_cache_refresh[1509]: Refreshing group entry cache
May 27 03:54:54.715927 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Failure getting groups, quitting
May 27 03:54:54.715927 google_oslogin_nss_cache[1509]: oslogin_cache_refresh[1509]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:54:54.715353 oslogin_cache_refresh[1509]: Failure getting groups, quitting
May 27 03:54:54.715362 oslogin_cache_refresh[1509]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:54:54.733689 coreos-metadata[1503]: May 27 03:54:54.733 INFO Putting http://169.254.169.254/v1/token: Attempt #1
May 27 03:54:54.737394 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:54:54.738222 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:54:54.738485 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:54:54.739452 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:54:54.740294 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:54:54.757686 jq[1507]: false
May 27 03:54:54.757898 jq[1521]: true
May 27 03:54:54.773164 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:54:54.773618 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:54:54.796261 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:54:54.816916 jq[1540]: true
May 27 03:54:54.841278 update_engine[1518]: I20250527 03:54:54.830775 1518 main.cc:92] Flatcar Update Engine starting
May 27 03:54:54.847736 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:54:54.850432 tar[1533]: linux-amd64/LICENSE
May 27 03:54:54.851036 tar[1533]: linux-amd64/helm
May 27 03:54:54.875555 dbus-daemon[1505]: [system] SELinux support is enabled
May 27 03:54:54.875763 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:54:54.880864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:54:54.880934 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:54:54.881567 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:54:54.881582 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:54:54.899589 systemd[1]: Started update-engine.service - Update Engine.
May 27 03:54:54.902024 update_engine[1518]: I20250527 03:54:54.901969 1518 update_check_scheduler.cc:74] Next update check in 3m16s
May 27 03:54:54.904948 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 03:54:54.906456 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:54:54.907331 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:54:54.944693 bash[1570]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:54:54.953649 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 03:54:54.959160 extend-filesystems[1508]: Found loop4
May 27 03:54:54.962480 extend-filesystems[1508]: Found loop5
May 27 03:54:54.962480 extend-filesystems[1508]: Found loop6
May 27 03:54:54.962480 extend-filesystems[1508]: Found loop7
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda1
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda2
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda3
May 27 03:54:54.962480 extend-filesystems[1508]: Found usr
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda4
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda6
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda7
May 27 03:54:54.962480 extend-filesystems[1508]: Found sda9
May 27 03:54:54.962480 extend-filesystems[1508]: Checking size of /dev/sda9
May 27 03:54:54.961698 systemd[1]: Starting sshkeys.service...
May 27 03:54:55.001901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 03:54:54.998407 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 03:54:55.002172 extend-filesystems[1508]: Resized partition /dev/sda9
May 27 03:54:55.001929 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 03:54:55.005027 extend-filesystems[1578]: resize2fs 1.47.2 (1-Jan-2025)
May 27 03:54:55.015918 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
May 27 03:54:55.021742 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:54:55.025318 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:54:55.028258 systemd-networkd[1465]: eth0: Link UP
May 27 03:54:55.034200 systemd-networkd[1465]: eth0: Gained carrier
May 27 03:54:55.034220 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:54:55.036897 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:54:55.041912 kernel: ACPI: button: Power Button [PWRF]
May 27 03:54:55.094204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 27 03:54:55.103274 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:54:55.150203 kernel: EDAC MC: Ver: 3.0.0
May 27 03:54:55.183828 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:54:55.205606 systemd-logind[1514]: New seat seat0.
May 27 03:54:55.210785 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 03:54:55.218286 containerd[1541]: time="2025-05-27T03:54:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:54:55.219573 containerd[1541]: time="2025-05-27T03:54:55.219544476Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:54:55.240275 kernel: EXT4-fs (sda9): resized filesystem to 20360187
May 27 03:54:55.251638 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 03:54:55.256999 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 03:54:55.258605 extend-filesystems[1578]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 27 03:54:55.258605 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 10
May 27 03:54:55.258605 extend-filesystems[1578]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
May 27 03:54:55.261175 extend-filesystems[1508]: Resized filesystem in /dev/sda9
May 27 03:54:55.262713 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 03:54:55.263381 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 03:54:55.278349 coreos-metadata[1579]: May 27 03:54:55.278 INFO Putting http://169.254.169.254/v1/token: Attempt #1
May 27 03:54:55.287527 containerd[1541]: time="2025-05-27T03:54:55.287474830Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.74µs"
May 27 03:54:55.287527 containerd[1541]: time="2025-05-27T03:54:55.287517170Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:54:55.287587 containerd[1541]: time="2025-05-27T03:54:55.287541260Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:54:55.287761 containerd[1541]: time="2025-05-27T03:54:55.287726730Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:54:55.287761 containerd[1541]: time="2025-05-27T03:54:55.287755510Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:54:55.287804 containerd[1541]: time="2025-05-27T03:54:55.287786030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:54:55.287901 containerd[1541]: time="2025-05-27T03:54:55.287855810Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:54:55.290605 containerd[1541]: time="2025-05-27T03:54:55.290556611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:54:55.290906 containerd[1541]: time="2025-05-27T03:54:55.290814311Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:54:55.290906 containerd[1541]: time="2025-05-27T03:54:55.290837311Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:54:55.290906 containerd[1541]: time="2025-05-27T03:54:55.290850461Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:54:55.290906 containerd[1541]: time="2025-05-27T03:54:55.290858321Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:54:55.292461 containerd[1541]: time="2025-05-27T03:54:55.292233942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:54:55.292699 containerd[1541]: time="2025-05-27T03:54:55.292666642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:54:55.292728 containerd[1541]: time="2025-05-27T03:54:55.292710022Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:54:55.292728 containerd[1541]: time="2025-05-27T03:54:55.292723762Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:54:55.296517 containerd[1541]: time="2025-05-27T03:54:55.296475554Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:54:55.297931 containerd[1541]: time="2025-05-27T03:54:55.297222934Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:54:55.297931 containerd[1541]: time="2025-05-27T03:54:55.297499155Z" level=info msg="metadata content store policy set" policy=shared May 27 03:54:55.317840 containerd[1541]: 
time="2025-05-27T03:54:55.317798235Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:54:55.317918 containerd[1541]: time="2025-05-27T03:54:55.317848855Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:54:55.317918 containerd[1541]: time="2025-05-27T03:54:55.317865015Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:54:55.317918 containerd[1541]: time="2025-05-27T03:54:55.317900165Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:54:55.317918 containerd[1541]: time="2025-05-27T03:54:55.317912955Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:54:55.318006 containerd[1541]: time="2025-05-27T03:54:55.317924375Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:54:55.318006 containerd[1541]: time="2025-05-27T03:54:55.317936195Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:54:55.318006 containerd[1541]: time="2025-05-27T03:54:55.317946365Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:54:55.318006 containerd[1541]: time="2025-05-27T03:54:55.317956905Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:54:55.318006 containerd[1541]: time="2025-05-27T03:54:55.317965435Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:54:55.318006 containerd[1541]: time="2025-05-27T03:54:55.317973325Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:54:55.318006 containerd[1541]: 
time="2025-05-27T03:54:55.317984395Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:54:55.318121 containerd[1541]: time="2025-05-27T03:54:55.318094285Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:54:55.318121 containerd[1541]: time="2025-05-27T03:54:55.318114925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:54:55.318157 containerd[1541]: time="2025-05-27T03:54:55.318129565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:54:55.318157 containerd[1541]: time="2025-05-27T03:54:55.318140705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:54:55.318157 containerd[1541]: time="2025-05-27T03:54:55.318150655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:54:55.318217 containerd[1541]: time="2025-05-27T03:54:55.318162325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:54:55.318217 containerd[1541]: time="2025-05-27T03:54:55.318173955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:54:55.318217 containerd[1541]: time="2025-05-27T03:54:55.318183815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:54:55.318217 containerd[1541]: time="2025-05-27T03:54:55.318199985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:54:55.318292 containerd[1541]: time="2025-05-27T03:54:55.318225125Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:54:55.318292 containerd[1541]: time="2025-05-27T03:54:55.318236245Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:54:55.318326 containerd[1541]: time="2025-05-27T03:54:55.318293605Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:54:55.318326 containerd[1541]: time="2025-05-27T03:54:55.318307205Z" level=info msg="Start snapshots syncer"
May 27 03:54:55.318361 containerd[1541]: time="2025-05-27T03:54:55.318331365Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:54:55.318583 containerd[1541]: time="2025-05-27T03:54:55.318536805Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\"
:true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:54:55.318689 containerd[1541]: time="2025-05-27T03:54:55.318587285Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:54:55.318740 containerd[1541]: time="2025-05-27T03:54:55.318710385Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 03:54:55.318854 containerd[1541]: time="2025-05-27T03:54:55.318822225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 03:54:55.318955 containerd[1541]: time="2025-05-27T03:54:55.318854535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 03:54:55.322998 containerd[1541]: time="2025-05-27T03:54:55.318866265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 03:54:55.322998 containerd[1541]: time="2025-05-27T03:54:55.322986457Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 03:54:55.323052 containerd[1541]: time="2025-05-27T03:54:55.323021447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 03:54:55.323052 containerd[1541]: time="2025-05-27T03:54:55.323035367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 03:54:55.323052 containerd[1541]: time="2025-05-27T03:54:55.323044997Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1
May 27 03:54:55.323101 containerd[1541]: time="2025-05-27T03:54:55.323065937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 03:54:55.323101 containerd[1541]: time="2025-05-27T03:54:55.323078347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 03:54:55.323101 containerd[1541]: time="2025-05-27T03:54:55.323089027Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 03:54:55.325959 containerd[1541]: time="2025-05-27T03:54:55.325922759Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:54:55.326030 containerd[1541]: time="2025-05-27T03:54:55.325955269Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:54:55.326030 containerd[1541]: time="2025-05-27T03:54:55.326019419Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:54:55.326030 containerd[1541]: time="2025-05-27T03:54:55.326030379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:54:55.326091 containerd[1541]: time="2025-05-27T03:54:55.326040809Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 03:54:55.326091 containerd[1541]: time="2025-05-27T03:54:55.326050839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 03:54:55.326091 containerd[1541]: time="2025-05-27T03:54:55.326060929Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 03:54:55.326091 containerd[1541]: 
time="2025-05-27T03:54:55.326080699Z" level=info msg="runtime interface created"
May 27 03:54:55.326091 containerd[1541]: time="2025-05-27T03:54:55.326086479Z" level=info msg="created NRI interface"
May 27 03:54:55.326178 containerd[1541]: time="2025-05-27T03:54:55.326099979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 03:54:55.326178 containerd[1541]: time="2025-05-27T03:54:55.326114289Z" level=info msg="Connect containerd service"
May 27 03:54:55.326178 containerd[1541]: time="2025-05-27T03:54:55.326140029Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 03:54:55.328566 containerd[1541]: time="2025-05-27T03:54:55.328522620Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 03:54:55.329732 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 03:54:55.437101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:54:55.519188 systemd-networkd[1465]: eth0: DHCPv4 address 172.234.212.30/24, gateway 172.234.212.1 acquired from 23.40.196.242
May 27 03:54:55.519380 dbus-daemon[1505]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1465 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 27 03:54:55.525940 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 03:54:55.526060 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 27 03:54:55.530283 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 27 03:54:55.563000 systemd-logind[1514]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 03:54:55.566979 containerd[1541]: time="2025-05-27T03:54:55.565534648Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 03:54:55.566979 containerd[1541]: time="2025-05-27T03:54:55.565726719Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 03:54:55.566979 containerd[1541]: time="2025-05-27T03:54:55.565779659Z" level=info msg="Start subscribing containerd event"
May 27 03:54:55.566979 containerd[1541]: time="2025-05-27T03:54:55.565811999Z" level=info msg="Start recovering state"
May 27 03:54:55.566979 containerd[1541]: time="2025-05-27T03:54:55.566950729Z" level=info msg="Start event monitor"
May 27 03:54:55.567121 containerd[1541]: time="2025-05-27T03:54:55.567108959Z" level=info msg="Start cni network conf syncer for default"
May 27 03:54:55.567214 containerd[1541]: time="2025-05-27T03:54:55.567201249Z" level=info msg="Start streaming server"
May 27 03:54:55.567347 containerd[1541]: time="2025-05-27T03:54:55.567249109Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 03:54:55.567347 containerd[1541]: time="2025-05-27T03:54:55.567287279Z" level=info msg="runtime interface starting up..."
May 27 03:54:55.567347 containerd[1541]: time="2025-05-27T03:54:55.567293909Z" level=info msg="starting plugins..."
May 27 03:54:55.567347 containerd[1541]: time="2025-05-27T03:54:55.567309929Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 03:54:55.567638 systemd[1]: Started containerd.service - containerd container runtime.
May 27 03:54:55.568334 containerd[1541]: time="2025-05-27T03:54:55.567560320Z" level=info msg="containerd successfully booted in 0.351390s"
May 27 03:54:55.682982 systemd-timesyncd[1436]: Contacted time server 142.202.190.19:123 (1.flatcar.pool.ntp.org).
May 27 03:54:55.683051 systemd-timesyncd[1436]: Initial clock synchronization to Tue 2025-05-27 03:54:55.916875 UTC.
May 27 03:54:55.704365 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 27 03:54:55.709746 dbus-daemon[1505]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 27 03:54:55.710444 dbus-daemon[1505]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1624 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 27 03:54:55.758497 coreos-metadata[1503]: May 27 03:54:55.758 INFO Putting http://169.254.169.254/v1/token: Attempt #2
May 27 03:54:55.829362 systemd[1]: Starting polkit.service - Authorization Manager...
May 27 03:54:55.872131 coreos-metadata[1503]: May 27 03:54:55.871 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
May 27 03:54:56.011323 polkitd[1632]: Started polkitd version 126
May 27 03:54:56.019858 polkitd[1632]: Loading rules from directory /etc/polkit-1/rules.d
May 27 03:54:56.020360 polkitd[1632]: Loading rules from directory /run/polkit-1/rules.d
May 27 03:54:56.020400 polkitd[1632]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 03:54:56.020622 polkitd[1632]: Loading rules from directory /usr/local/share/polkit-1/rules.d
May 27 03:54:56.020641 polkitd[1632]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 03:54:56.020675 polkitd[1632]: Loading rules from directory /usr/share/polkit-1/rules.d
May 27 03:54:56.025681 polkitd[1632]: Finished loading, compiling and executing 2 rules
May 27 03:54:56.026240 systemd[1]: Started polkit.service - Authorization Manager.
May 27 03:54:56.026613 dbus-daemon[1505]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 27 03:54:56.028938 polkitd[1632]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 27 03:54:56.036369 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:54:56.047384 systemd-resolved[1412]: System hostname changed to '172-234-212-30'.
May 27 03:54:56.047697 systemd-hostnamed[1624]: Hostname set to <172-234-212-30> (transient)
May 27 03:54:56.063110 coreos-metadata[1503]: May 27 03:54:56.062 INFO Fetch successful
May 27 03:54:56.063110 coreos-metadata[1503]: May 27 03:54:56.063 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
May 27 03:54:56.089761 tar[1533]: linux-amd64/README.md
May 27 03:54:56.107010 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 03:54:56.125259 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 03:54:56.148798 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 03:54:56.152314 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 03:54:56.167894 systemd[1]: issuegen.service: Deactivated successfully.
May 27 03:54:56.168201 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 03:54:56.172806 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 03:54:56.191362 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 03:54:56.194418 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 03:54:56.198185 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 03:54:56.198945 systemd[1]: Reached target getty.target - Login Prompts.
May 27 03:54:56.288164 coreos-metadata[1579]: May 27 03:54:56.288 INFO Putting http://169.254.169.254/v1/token: Attempt #2
May 27 03:54:56.331288 coreos-metadata[1503]: May 27 03:54:56.331 INFO Fetch successful
May 27 03:54:56.379377 coreos-metadata[1579]: May 27 03:54:56.379 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
May 27 03:54:56.404214 systemd-networkd[1465]: eth0: Gained IPv6LL
May 27 03:54:56.408147 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 03:54:56.410434 systemd[1]: Reached target network-online.target - Network is Online.
May 27 03:54:56.414396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:54:56.418124 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 03:54:56.425980 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 03:54:56.429658 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 03:54:56.440940 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 03:54:56.516678 coreos-metadata[1579]: May 27 03:54:56.516 INFO Fetch successful
May 27 03:54:56.540352 update-ssh-keys[1695]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:54:56.541354 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 27 03:54:56.545077 systemd[1]: Finished sshkeys.service.
May 27 03:54:57.266424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:54:57.267509 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 03:54:57.268829 systemd[1]: Startup finished in 2.794s (kernel) + 7.386s (initrd) + 5.289s (userspace) = 15.469s.
May 27 03:54:57.313241 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:54:57.787397 kubelet[1704]: E0527 03:54:57.787116 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:54:57.791412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:54:57.791636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:54:57.792133 systemd[1]: kubelet.service: Consumed 875ms CPU time, 262.5M memory peak.
May 27 03:54:58.665522 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 03:54:58.667012 systemd[1]: Started sshd@0-172.234.212.30:22-139.178.68.195:60280.service - OpenSSH per-connection server daemon (139.178.68.195:60280).
May 27 03:54:59.024978 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 60280 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:54:59.027186 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:54:59.037838 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 03:54:59.039308 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 03:54:59.049990 systemd-logind[1514]: New session 1 of user core.
May 27 03:54:59.064392 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 03:54:59.068278 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 03:54:59.088613 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 03:54:59.092141 systemd-logind[1514]: New session c1 of user core.
May 27 03:54:59.260910 systemd[1720]: Queued start job for default target default.target.
May 27 03:54:59.273213 systemd[1720]: Created slice app.slice - User Application Slice.
May 27 03:54:59.273244 systemd[1720]: Reached target paths.target - Paths.
May 27 03:54:59.273403 systemd[1720]: Reached target timers.target - Timers.
May 27 03:54:59.277178 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 03:54:59.287304 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 03:54:59.287459 systemd[1720]: Reached target sockets.target - Sockets.
May 27 03:54:59.287508 systemd[1720]: Reached target basic.target - Basic System.
May 27 03:54:59.287550 systemd[1720]: Reached target default.target - Main User Target.
May 27 03:54:59.287587 systemd[1720]: Startup finished in 185ms.
May 27 03:54:59.287767 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 03:54:59.296042 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 03:54:59.570845 systemd[1]: Started sshd@1-172.234.212.30:22-139.178.68.195:60290.service - OpenSSH per-connection server daemon (139.178.68.195:60290).
May 27 03:54:59.929377 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 60290 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:54:59.931264 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:54:59.938684 systemd-logind[1514]: New session 2 of user core.
May 27 03:54:59.946036 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 03:55:00.187443 sshd[1733]: Connection closed by 139.178.68.195 port 60290
May 27 03:55:00.188609 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
May 27 03:55:00.192790 systemd[1]: sshd@1-172.234.212.30:22-139.178.68.195:60290.service: Deactivated successfully.
May 27 03:55:00.195161 systemd[1]: session-2.scope: Deactivated successfully.
May 27 03:55:00.197233 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit.
May 27 03:55:00.198876 systemd-logind[1514]: Removed session 2.
May 27 03:55:00.252073 systemd[1]: Started sshd@2-172.234.212.30:22-139.178.68.195:60302.service - OpenSSH per-connection server daemon (139.178.68.195:60302).
May 27 03:55:00.594270 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 60302 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:55:00.595930 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:55:00.601576 systemd-logind[1514]: New session 3 of user core.
May 27 03:55:00.607007 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 03:55:00.840862 sshd[1741]: Connection closed by 139.178.68.195 port 60302
May 27 03:55:00.841776 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
May 27 03:55:00.847590 systemd[1]: sshd@2-172.234.212.30:22-139.178.68.195:60302.service: Deactivated successfully.
May 27 03:55:00.850743 systemd[1]: session-3.scope: Deactivated successfully.
May 27 03:55:00.851716 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit.
May 27 03:55:00.853098 systemd-logind[1514]: Removed session 3.
May 27 03:55:00.908720 systemd[1]: Started sshd@3-172.234.212.30:22-139.178.68.195:60306.service - OpenSSH per-connection server daemon (139.178.68.195:60306).
May 27 03:55:01.277025 sshd[1747]: Accepted publickey for core from 139.178.68.195 port 60306 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:55:01.279039 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:55:01.283913 systemd-logind[1514]: New session 4 of user core.
May 27 03:55:01.293240 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:55:01.533129 sshd[1749]: Connection closed by 139.178.68.195 port 60306
May 27 03:55:01.533953 sshd-session[1747]: pam_unix(sshd:session): session closed for user core
May 27 03:55:01.539480 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit.
May 27 03:55:01.540359 systemd[1]: sshd@3-172.234.212.30:22-139.178.68.195:60306.service: Deactivated successfully.
May 27 03:55:01.543330 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:55:01.545301 systemd-logind[1514]: Removed session 4.
May 27 03:55:01.596247 systemd[1]: Started sshd@4-172.234.212.30:22-139.178.68.195:60310.service - OpenSSH per-connection server daemon (139.178.68.195:60310).
May 27 03:55:01.944390 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 60310 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:55:01.945931 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:55:01.950119 systemd-logind[1514]: New session 5 of user core.
May 27 03:55:01.965024 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:55:02.156275 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:55:02.156623 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:55:02.170196 sudo[1758]: pam_unix(sudo:session): session closed for user root
May 27 03:55:02.222868 sshd[1757]: Connection closed by 139.178.68.195 port 60310
May 27 03:55:02.223306 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
May 27 03:55:02.231801 systemd[1]: sshd@4-172.234.212.30:22-139.178.68.195:60310.service: Deactivated successfully.
May 27 03:55:02.234114 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:55:02.237079 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit.
May 27 03:55:02.238362 systemd-logind[1514]: Removed session 5.
May 27 03:55:02.283070 systemd[1]: Started sshd@5-172.234.212.30:22-139.178.68.195:60316.service - OpenSSH per-connection server daemon (139.178.68.195:60316).
May 27 03:55:02.623140 sshd[1764]: Accepted publickey for core from 139.178.68.195 port 60316 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:55:02.625064 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:55:02.631012 systemd-logind[1514]: New session 6 of user core.
May 27 03:55:02.641036 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:55:02.818618 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:55:02.818938 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:55:02.823369 sudo[1768]: pam_unix(sudo:session): session closed for user root
May 27 03:55:02.828717 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:55:02.829054 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:55:02.838611 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:55:02.878176 augenrules[1790]: No rules
May 27 03:55:02.879674 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:55:02.880017 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:55:02.881375 sudo[1767]: pam_unix(sudo:session): session closed for user root
May 27 03:55:02.931382 sshd[1766]: Connection closed by 139.178.68.195 port 60316
May 27 03:55:02.931979 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
May 27 03:55:02.936057 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit.
May 27 03:55:02.936672 systemd[1]: sshd@5-172.234.212.30:22-139.178.68.195:60316.service: Deactivated successfully.
May 27 03:55:02.938596 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:55:02.940413 systemd-logind[1514]: Removed session 6.
May 27 03:55:03.002170 systemd[1]: Started sshd@6-172.234.212.30:22-139.178.68.195:60326.service - OpenSSH per-connection server daemon (139.178.68.195:60326).
May 27 03:55:03.364031 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 60326 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:55:03.365943 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:55:03.389664 systemd-logind[1514]: New session 7 of user core.
May 27 03:55:03.398310 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:55:03.570745 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:55:03.571537 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:55:03.869579 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:55:03.883204 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:55:04.092016 dockerd[1819]: time="2025-05-27T03:55:04.091941308Z" level=info msg="Starting up"
May 27 03:55:04.095041 dockerd[1819]: time="2025-05-27T03:55:04.094529640Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:55:04.135807 systemd[1]: var-lib-docker-metacopy\x2dcheck3891905808-merged.mount: Deactivated successfully.
May 27 03:55:04.155709 dockerd[1819]: time="2025-05-27T03:55:04.155670322Z" level=info msg="Loading containers: start."
May 27 03:55:04.166921 kernel: Initializing XFRM netlink socket
May 27 03:55:04.410338 systemd-networkd[1465]: docker0: Link UP
May 27 03:55:04.412848 dockerd[1819]: time="2025-05-27T03:55:04.412800606Z" level=info msg="Loading containers: done."
May 27 03:55:04.429308 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck32416315-merged.mount: Deactivated successfully.
May 27 03:55:04.432484 dockerd[1819]: time="2025-05-27T03:55:04.432432118Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:55:04.432553 dockerd[1819]: time="2025-05-27T03:55:04.432514952Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:55:04.432656 dockerd[1819]: time="2025-05-27T03:55:04.432620038Z" level=info msg="Initializing buildkit"
May 27 03:55:04.454517 dockerd[1819]: time="2025-05-27T03:55:04.454452365Z" level=info msg="Completed buildkit initialization"
May 27 03:55:04.463698 dockerd[1819]: time="2025-05-27T03:55:04.463660306Z" level=info msg="Daemon has completed initialization"
May 27 03:55:04.463900 dockerd[1819]: time="2025-05-27T03:55:04.463841823Z" level=info msg="API listen on /run/docker.sock"
May 27 03:55:04.464045 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:55:04.989969 containerd[1541]: time="2025-05-27T03:55:04.989906574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 27 03:55:05.752367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234397215.mount: Deactivated successfully.
May 27 03:55:07.073572 containerd[1541]: time="2025-05-27T03:55:07.073514083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:07.074496 containerd[1541]: time="2025-05-27T03:55:07.074373149Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 27 03:55:07.075370 containerd[1541]: time="2025-05-27T03:55:07.075335740Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:07.077899 containerd[1541]: time="2025-05-27T03:55:07.077538819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:07.078559 containerd[1541]: time="2025-05-27T03:55:07.078429255Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.088483456s"
May 27 03:55:07.078559 containerd[1541]: time="2025-05-27T03:55:07.078457686Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 27 03:55:07.079432 containerd[1541]: time="2025-05-27T03:55:07.079412162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 03:55:07.918745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 03:55:07.921195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:55:08.116031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:55:08.131400 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:55:08.181066 kubelet[2084]: E0527 03:55:08.180839 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:55:08.186649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:55:08.186857 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:55:08.187360 systemd[1]: kubelet.service: Consumed 195ms CPU time, 108.3M memory peak.
May 27 03:55:08.849047 containerd[1541]: time="2025-05-27T03:55:08.848983443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:08.850051 containerd[1541]: time="2025-05-27T03:55:08.849846913Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 27 03:55:08.850766 containerd[1541]: time="2025-05-27T03:55:08.850731304Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:08.852716 containerd[1541]: time="2025-05-27T03:55:08.852681751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:08.853549 containerd[1541]: time="2025-05-27T03:55:08.853515326Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.774078726s"
May 27 03:55:08.853549 containerd[1541]: time="2025-05-27T03:55:08.853548631Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 27 03:55:08.854390 containerd[1541]: time="2025-05-27T03:55:08.854372024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 03:55:10.221102 containerd[1541]: time="2025-05-27T03:55:10.221042846Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:10.221971 containerd[1541]: time="2025-05-27T03:55:10.221805523Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 27 03:55:10.222560 containerd[1541]: time="2025-05-27T03:55:10.222535873Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:10.224712 containerd[1541]: time="2025-05-27T03:55:10.224687890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:10.225839 containerd[1541]: time="2025-05-27T03:55:10.225815184Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.371365365s" May 27 03:55:10.226117 containerd[1541]: time="2025-05-27T03:55:10.225916047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 27 03:55:10.227004 containerd[1541]: time="2025-05-27T03:55:10.226979484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 03:55:12.536219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565588255.mount: Deactivated successfully. 
May 27 03:55:12.913243 containerd[1541]: time="2025-05-27T03:55:12.912973031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:12.913825 containerd[1541]: time="2025-05-27T03:55:12.913801203Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 27 03:55:12.914696 containerd[1541]: time="2025-05-27T03:55:12.914636378Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:12.915900 containerd[1541]: time="2025-05-27T03:55:12.915853619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:12.918142 containerd[1541]: time="2025-05-27T03:55:12.916379068Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.689371387s" May 27 03:55:12.918142 containerd[1541]: time="2025-05-27T03:55:12.916411098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 27 03:55:12.920311 containerd[1541]: time="2025-05-27T03:55:12.920287414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 03:55:13.625554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644848202.mount: Deactivated successfully. 
May 27 03:55:14.306057 containerd[1541]: time="2025-05-27T03:55:14.305975018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:14.307326 containerd[1541]: time="2025-05-27T03:55:14.307215752Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 03:55:14.308141 containerd[1541]: time="2025-05-27T03:55:14.308108516Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:14.310447 containerd[1541]: time="2025-05-27T03:55:14.310393154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:14.311796 containerd[1541]: time="2025-05-27T03:55:14.311444247Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.391057539s" May 27 03:55:14.311796 containerd[1541]: time="2025-05-27T03:55:14.311477415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 03:55:14.312170 containerd[1541]: time="2025-05-27T03:55:14.312148455Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:55:14.939236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939015481.mount: Deactivated successfully. 
May 27 03:55:14.944410 containerd[1541]: time="2025-05-27T03:55:14.944378099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:55:14.945016 containerd[1541]: time="2025-05-27T03:55:14.944996541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:55:14.946764 containerd[1541]: time="2025-05-27T03:55:14.945584693Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:55:14.947269 containerd[1541]: time="2025-05-27T03:55:14.947229342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:55:14.952631 containerd[1541]: time="2025-05-27T03:55:14.952581932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 640.352915ms" May 27 03:55:14.954081 containerd[1541]: time="2025-05-27T03:55:14.954048241Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:55:14.954901 containerd[1541]: time="2025-05-27T03:55:14.954833213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 03:55:15.677141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81235795.mount: Deactivated 
successfully. May 27 03:55:17.182579 containerd[1541]: time="2025-05-27T03:55:17.181623293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:17.183586 containerd[1541]: time="2025-05-27T03:55:17.183494453Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 27 03:55:17.184515 containerd[1541]: time="2025-05-27T03:55:17.184425089Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:17.187283 containerd[1541]: time="2025-05-27T03:55:17.187253524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:55:17.188322 containerd[1541]: time="2025-05-27T03:55:17.188270452Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.233412981s" May 27 03:55:17.188598 containerd[1541]: time="2025-05-27T03:55:17.188296819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 03:55:18.418691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 03:55:18.422004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:18.596991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:55:18.606583 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:55:18.648238 kubelet[2245]: E0527 03:55:18.648201 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:55:18.652055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:55:18.652240 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:55:18.652635 systemd[1]: kubelet.service: Consumed 180ms CPU time, 110.2M memory peak. May 27 03:55:19.313965 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:19.314130 systemd[1]: kubelet.service: Consumed 180ms CPU time, 110.2M memory peak. May 27 03:55:19.316318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:19.344510 systemd[1]: Reload requested from client PID 2259 ('systemctl') (unit session-7.scope)... May 27 03:55:19.344596 systemd[1]: Reloading... May 27 03:55:19.500920 zram_generator::config[2303]: No configuration found. May 27 03:55:19.599476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:55:19.705248 systemd[1]: Reloading finished in 360 ms. May 27 03:55:19.767328 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:55:19.767436 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:55:19.767988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:55:19.768039 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.3M memory peak. May 27 03:55:19.769865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:19.942330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:19.952159 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:55:19.994229 kubelet[2357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:55:19.994229 kubelet[2357]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:55:19.994229 kubelet[2357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:55:19.994486 kubelet[2357]: I0527 03:55:19.994280 2357 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:55:20.289385 kubelet[2357]: I0527 03:55:20.289278 2357 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:55:20.289385 kubelet[2357]: I0527 03:55:20.289304 2357 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:55:20.289758 kubelet[2357]: I0527 03:55:20.289731 2357 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:55:20.316515 kubelet[2357]: E0527 03:55:20.316482 2357 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.212.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.212.30:6443: connect: connection refused" logger="UnhandledError" May 27 03:55:20.321923 kubelet[2357]: I0527 03:55:20.321895 2357 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:55:20.333016 kubelet[2357]: I0527 03:55:20.332031 2357 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:55:20.336253 kubelet[2357]: I0527 03:55:20.336224 2357 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:55:20.337509 kubelet[2357]: I0527 03:55:20.337461 2357 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:55:20.337666 kubelet[2357]: I0527 03:55:20.337494 2357 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-212-30","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:55:20.337666 kubelet[2357]: I0527 03:55:20.337661 2357 topology_manager.go:138] "Creating topology manager with none 
policy" May 27 03:55:20.337795 kubelet[2357]: I0527 03:55:20.337671 2357 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:55:20.337795 kubelet[2357]: I0527 03:55:20.337773 2357 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:20.340897 kubelet[2357]: I0527 03:55:20.340856 2357 kubelet.go:446] "Attempting to sync node with API server" May 27 03:55:20.341273 kubelet[2357]: I0527 03:55:20.341209 2357 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:55:20.341273 kubelet[2357]: I0527 03:55:20.341236 2357 kubelet.go:352] "Adding apiserver pod source" May 27 03:55:20.341273 kubelet[2357]: I0527 03:55:20.341249 2357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:55:20.345454 kubelet[2357]: W0527 03:55:20.345251 2357 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.212.30:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-212-30&limit=500&resourceVersion=0": dial tcp 172.234.212.30:6443: connect: connection refused May 27 03:55:20.345552 kubelet[2357]: E0527 03:55:20.345534 2357 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.212.30:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-212-30&limit=500&resourceVersion=0\": dial tcp 172.234.212.30:6443: connect: connection refused" logger="UnhandledError" May 27 03:55:20.345689 kubelet[2357]: I0527 03:55:20.345675 2357 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:55:20.346292 kubelet[2357]: I0527 03:55:20.346277 2357 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:55:20.347135 kubelet[2357]: W0527 03:55:20.346931 2357 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 03:55:20.349943 kubelet[2357]: I0527 03:55:20.349753 2357 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:55:20.349943 kubelet[2357]: I0527 03:55:20.349784 2357 server.go:1287] "Started kubelet" May 27 03:55:20.353000 kubelet[2357]: W0527 03:55:20.352952 2357 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.212.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.212.30:6443: connect: connection refused May 27 03:55:20.353067 kubelet[2357]: E0527 03:55:20.353008 2357 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.212.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.212.30:6443: connect: connection refused" logger="UnhandledError" May 27 03:55:20.353124 kubelet[2357]: I0527 03:55:20.353090 2357 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:55:20.353336 kubelet[2357]: I0527 03:55:20.353294 2357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:55:20.353658 kubelet[2357]: I0527 03:55:20.353643 2357 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:55:20.353869 kubelet[2357]: I0527 03:55:20.353835 2357 server.go:479] "Adding debug handlers to kubelet server" May 27 03:55:20.354896 kubelet[2357]: E0527 03:55:20.353826 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.212.30:6443/api/v1/namespaces/default/events\": dial tcp 172.234.212.30:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-212-30.1843461145d3921b default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-212-30,UID:172-234-212-30,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-212-30,},FirstTimestamp:2025-05-27 03:55:20.349766171 +0000 UTC m=+0.393013366,LastTimestamp:2025-05-27 03:55:20.349766171 +0000 UTC m=+0.393013366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-212-30,}" May 27 03:55:20.356367 kubelet[2357]: I0527 03:55:20.355790 2357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:55:20.357136 kubelet[2357]: I0527 03:55:20.357120 2357 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:55:20.360446 kubelet[2357]: E0527 03:55:20.360431 2357 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-212-30\" not found" May 27 03:55:20.361283 kubelet[2357]: I0527 03:55:20.360573 2357 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:55:20.361283 kubelet[2357]: I0527 03:55:20.360737 2357 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:55:20.361283 kubelet[2357]: I0527 03:55:20.360778 2357 reconciler.go:26] "Reconciler: start to sync state" May 27 03:55:20.361707 kubelet[2357]: W0527 03:55:20.361679 2357 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.212.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.212.30:6443: connect: connection refused May 27 03:55:20.362070 kubelet[2357]: E0527 03:55:20.362053 2357 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.234.212.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.212.30:6443: connect: connection refused" logger="UnhandledError" May 27 03:55:20.362661 kubelet[2357]: E0527 03:55:20.362410 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.212.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-212-30?timeout=10s\": dial tcp 172.234.212.30:6443: connect: connection refused" interval="200ms" May 27 03:55:20.362902 kubelet[2357]: E0527 03:55:20.362847 2357 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:55:20.363055 kubelet[2357]: I0527 03:55:20.363026 2357 factory.go:221] Registration of the systemd container factory successfully May 27 03:55:20.363110 kubelet[2357]: I0527 03:55:20.363092 2357 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:55:20.366035 kubelet[2357]: I0527 03:55:20.364685 2357 factory.go:221] Registration of the containerd container factory successfully May 27 03:55:20.375481 kubelet[2357]: I0527 03:55:20.375437 2357 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:55:20.377480 kubelet[2357]: I0527 03:55:20.377409 2357 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:55:20.377480 kubelet[2357]: I0527 03:55:20.377430 2357 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:55:20.377480 kubelet[2357]: I0527 03:55:20.377445 2357 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:55:20.377480 kubelet[2357]: I0527 03:55:20.377451 2357 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:55:20.377592 kubelet[2357]: E0527 03:55:20.377494 2357 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:55:20.383280 kubelet[2357]: W0527 03:55:20.383237 2357 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.212.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.212.30:6443: connect: connection refused May 27 03:55:20.383329 kubelet[2357]: E0527 03:55:20.383283 2357 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.212.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.212.30:6443: connect: connection refused" logger="UnhandledError" May 27 03:55:20.389235 kubelet[2357]: I0527 03:55:20.389209 2357 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:55:20.389235 kubelet[2357]: I0527 03:55:20.389225 2357 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:55:20.389235 kubelet[2357]: I0527 03:55:20.389240 2357 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:20.390598 kubelet[2357]: I0527 03:55:20.390580 2357 policy_none.go:49] "None policy: Start" May 27 03:55:20.390598 kubelet[2357]: I0527 03:55:20.390597 2357 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:55:20.390661 kubelet[2357]: I0527 03:55:20.390611 2357 state_mem.go:35] "Initializing new in-memory state store" May 27 03:55:20.396106 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 27 03:55:20.409550 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:55:20.413638 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:55:20.426859 kubelet[2357]: I0527 03:55:20.426833 2357 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:55:20.427058 kubelet[2357]: I0527 03:55:20.427035 2357 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:55:20.427104 kubelet[2357]: I0527 03:55:20.427054 2357 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:55:20.427943 kubelet[2357]: I0527 03:55:20.427892 2357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:55:20.429746 kubelet[2357]: E0527 03:55:20.429728 2357 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:55:20.430306 kubelet[2357]: E0527 03:55:20.430260 2357 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-212-30\" not found" May 27 03:55:20.488229 systemd[1]: Created slice kubepods-burstable-pod1173a5928226451147dd4965136524fe.slice - libcontainer container kubepods-burstable-pod1173a5928226451147dd4965136524fe.slice. May 27 03:55:20.505397 kubelet[2357]: E0527 03:55:20.505358 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:20.509210 systemd[1]: Created slice kubepods-burstable-podfbd567f844f73220e6f3f8e9272201ec.slice - libcontainer container kubepods-burstable-podfbd567f844f73220e6f3f8e9272201ec.slice. 
May 27 03:55:20.512333 kubelet[2357]: E0527 03:55:20.512301 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:20.514919 systemd[1]: Created slice kubepods-burstable-poddbb5adff9f5c11e328778536c4c55d3e.slice - libcontainer container kubepods-burstable-poddbb5adff9f5c11e328778536c4c55d3e.slice. May 27 03:55:20.516809 kubelet[2357]: E0527 03:55:20.516645 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:20.529237 kubelet[2357]: I0527 03:55:20.528978 2357 kubelet_node_status.go:75] "Attempting to register node" node="172-234-212-30" May 27 03:55:20.529617 kubelet[2357]: E0527 03:55:20.529584 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.212.30:6443/api/v1/nodes\": dial tcp 172.234.212.30:6443: connect: connection refused" node="172-234-212-30" May 27 03:55:20.562090 kubelet[2357]: I0527 03:55:20.561936 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:20.562090 kubelet[2357]: I0527 03:55:20.561968 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbb5adff9f5c11e328778536c4c55d3e-kubeconfig\") pod \"kube-scheduler-172-234-212-30\" (UID: \"dbb5adff9f5c11e328778536c4c55d3e\") " pod="kube-system/kube-scheduler-172-234-212-30" May 27 03:55:20.562090 kubelet[2357]: I0527 03:55:20.561987 2357 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1173a5928226451147dd4965136524fe-ca-certs\") pod \"kube-apiserver-172-234-212-30\" (UID: \"1173a5928226451147dd4965136524fe\") " pod="kube-system/kube-apiserver-172-234-212-30" May 27 03:55:20.562090 kubelet[2357]: I0527 03:55:20.562002 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1173a5928226451147dd4965136524fe-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-212-30\" (UID: \"1173a5928226451147dd4965136524fe\") " pod="kube-system/kube-apiserver-172-234-212-30" May 27 03:55:20.562090 kubelet[2357]: I0527 03:55:20.562020 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-ca-certs\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:20.562261 kubelet[2357]: I0527 03:55:20.562040 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-kubeconfig\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:20.562261 kubelet[2357]: I0527 03:55:20.562056 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1173a5928226451147dd4965136524fe-k8s-certs\") pod \"kube-apiserver-172-234-212-30\" (UID: \"1173a5928226451147dd4965136524fe\") " pod="kube-system/kube-apiserver-172-234-212-30" May 27 03:55:20.562991 kubelet[2357]: I0527 03:55:20.562959 2357 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-flexvolume-dir\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:20.562991 kubelet[2357]: I0527 03:55:20.562986 2357 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-k8s-certs\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:20.563223 kubelet[2357]: E0527 03:55:20.563184 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.212.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-212-30?timeout=10s\": dial tcp 172.234.212.30:6443: connect: connection refused" interval="400ms" May 27 03:55:20.731567 kubelet[2357]: I0527 03:55:20.731544 2357 kubelet_node_status.go:75] "Attempting to register node" node="172-234-212-30" May 27 03:55:20.731856 kubelet[2357]: E0527 03:55:20.731829 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.212.30:6443/api/v1/nodes\": dial tcp 172.234.212.30:6443: connect: connection refused" node="172-234-212-30" May 27 03:55:20.806688 kubelet[2357]: E0527 03:55:20.806657 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:20.807233 containerd[1541]: time="2025-05-27T03:55:20.807197515Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-234-212-30,Uid:1173a5928226451147dd4965136524fe,Namespace:kube-system,Attempt:0,}" May 27 03:55:20.813047 kubelet[2357]: E0527 03:55:20.812962 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:20.813290 containerd[1541]: time="2025-05-27T03:55:20.813244067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-212-30,Uid:fbd567f844f73220e6f3f8e9272201ec,Namespace:kube-system,Attempt:0,}" May 27 03:55:20.821851 kubelet[2357]: E0527 03:55:20.820993 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:20.833372 containerd[1541]: time="2025-05-27T03:55:20.833342193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-212-30,Uid:dbb5adff9f5c11e328778536c4c55d3e,Namespace:kube-system,Attempt:0,}" May 27 03:55:20.841267 containerd[1541]: time="2025-05-27T03:55:20.841239159Z" level=info msg="connecting to shim 67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4" address="unix:///run/containerd/s/139c459223f0cac1e61727276e71a4843887d1d54eb866273e3fc71dd0caff58" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:20.842973 containerd[1541]: time="2025-05-27T03:55:20.842951310Z" level=info msg="connecting to shim d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283" address="unix:///run/containerd/s/f7dd501f8749fcdffebe90a39c3844b6ecb1631dc52a880481ce99aef1f8f55e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:20.862896 containerd[1541]: time="2025-05-27T03:55:20.862829145Z" level=info msg="connecting to shim bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4" 
address="unix:///run/containerd/s/12a85d788cc659a58326e538636453a617e60a5fafceb0046c29ac3e1469986e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:20.887043 systemd[1]: Started cri-containerd-67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4.scope - libcontainer container 67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4. May 27 03:55:20.891794 systemd[1]: Started cri-containerd-d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283.scope - libcontainer container d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283. May 27 03:55:20.909790 systemd[1]: Started cri-containerd-bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4.scope - libcontainer container bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4. May 27 03:55:20.963894 kubelet[2357]: E0527 03:55:20.963528 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.212.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-212-30?timeout=10s\": dial tcp 172.234.212.30:6443: connect: connection refused" interval="800ms" May 27 03:55:20.973965 containerd[1541]: time="2025-05-27T03:55:20.973936381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-212-30,Uid:fbd567f844f73220e6f3f8e9272201ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283\"" May 27 03:55:20.976416 kubelet[2357]: E0527 03:55:20.976394 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:20.980948 containerd[1541]: time="2025-05-27T03:55:20.980925950Z" level=info msg="CreateContainer within sandbox \"d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 
27 03:55:20.989094 containerd[1541]: time="2025-05-27T03:55:20.989034097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-212-30,Uid:1173a5928226451147dd4965136524fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4\"" May 27 03:55:20.992740 kubelet[2357]: E0527 03:55:20.992634 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:20.996901 containerd[1541]: time="2025-05-27T03:55:20.996455910Z" level=info msg="CreateContainer within sandbox \"67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:55:20.998234 containerd[1541]: time="2025-05-27T03:55:20.997676337Z" level=info msg="Container 42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:21.005420 containerd[1541]: time="2025-05-27T03:55:21.005400008Z" level=info msg="CreateContainer within sandbox \"d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44\"" May 27 03:55:21.005942 containerd[1541]: time="2025-05-27T03:55:21.005923381Z" level=info msg="StartContainer for \"42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44\"" May 27 03:55:21.006841 containerd[1541]: time="2025-05-27T03:55:21.006819521Z" level=info msg="connecting to shim 42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44" address="unix:///run/containerd/s/f7dd501f8749fcdffebe90a39c3844b6ecb1631dc52a880481ce99aef1f8f55e" protocol=ttrpc version=3 May 27 03:55:21.011326 containerd[1541]: time="2025-05-27T03:55:21.010320065Z" level=info msg="Container 
b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:21.014630 containerd[1541]: time="2025-05-27T03:55:21.014610848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-212-30,Uid:dbb5adff9f5c11e328778536c4c55d3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4\"" May 27 03:55:21.015962 containerd[1541]: time="2025-05-27T03:55:21.015719559Z" level=info msg="CreateContainer within sandbox \"67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d\"" May 27 03:55:21.016527 kubelet[2357]: E0527 03:55:21.016457 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:21.017068 containerd[1541]: time="2025-05-27T03:55:21.017049830Z" level=info msg="StartContainer for \"b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d\"" May 27 03:55:21.019721 containerd[1541]: time="2025-05-27T03:55:21.019658358Z" level=info msg="connecting to shim b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d" address="unix:///run/containerd/s/139c459223f0cac1e61727276e71a4843887d1d54eb866273e3fc71dd0caff58" protocol=ttrpc version=3 May 27 03:55:21.021151 containerd[1541]: time="2025-05-27T03:55:21.021111976Z" level=info msg="CreateContainer within sandbox \"bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:55:21.026397 containerd[1541]: time="2025-05-27T03:55:21.026224483Z" level=info msg="Container 2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694: CDI devices from CRI Config.CDIDevices: []" May 
27 03:55:21.035248 systemd[1]: Started cri-containerd-42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44.scope - libcontainer container 42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44. May 27 03:55:21.036583 containerd[1541]: time="2025-05-27T03:55:21.036562223Z" level=info msg="CreateContainer within sandbox \"bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694\"" May 27 03:55:21.038035 containerd[1541]: time="2025-05-27T03:55:21.037993820Z" level=info msg="StartContainer for \"2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694\"" May 27 03:55:21.040138 containerd[1541]: time="2025-05-27T03:55:21.040100266Z" level=info msg="connecting to shim 2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694" address="unix:///run/containerd/s/12a85d788cc659a58326e538636453a617e60a5fafceb0046c29ac3e1469986e" protocol=ttrpc version=3 May 27 03:55:21.052103 systemd[1]: Started cri-containerd-b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d.scope - libcontainer container b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d. May 27 03:55:21.085011 systemd[1]: Started cri-containerd-2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694.scope - libcontainer container 2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694. 
May 27 03:55:21.122984 containerd[1541]: time="2025-05-27T03:55:21.122955749Z" level=info msg="StartContainer for \"42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44\" returns successfully" May 27 03:55:21.136788 kubelet[2357]: I0527 03:55:21.136767 2357 kubelet_node_status.go:75] "Attempting to register node" node="172-234-212-30" May 27 03:55:21.137236 kubelet[2357]: E0527 03:55:21.137210 2357 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.212.30:6443/api/v1/nodes\": dial tcp 172.234.212.30:6443: connect: connection refused" node="172-234-212-30" May 27 03:55:21.172576 containerd[1541]: time="2025-05-27T03:55:21.172534000Z" level=info msg="StartContainer for \"2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694\" returns successfully" May 27 03:55:21.191427 containerd[1541]: time="2025-05-27T03:55:21.191369781Z" level=info msg="StartContainer for \"b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d\" returns successfully" May 27 03:55:21.397342 kubelet[2357]: E0527 03:55:21.397243 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:21.397451 kubelet[2357]: E0527 03:55:21.397403 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:21.398295 kubelet[2357]: E0527 03:55:21.397563 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:21.398295 kubelet[2357]: E0527 03:55:21.397644 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 
03:55:21.402925 kubelet[2357]: E0527 03:55:21.402639 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:21.403099 kubelet[2357]: E0527 03:55:21.403086 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:21.942857 kubelet[2357]: I0527 03:55:21.942824 2357 kubelet_node_status.go:75] "Attempting to register node" node="172-234-212-30" May 27 03:55:22.355674 kubelet[2357]: I0527 03:55:22.355190 2357 apiserver.go:52] "Watching apiserver" May 27 03:55:22.367084 kubelet[2357]: E0527 03:55:22.367049 2357 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:22.404389 kubelet[2357]: E0527 03:55:22.404355 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:22.404466 kubelet[2357]: E0527 03:55:22.404457 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:22.404730 kubelet[2357]: E0527 03:55:22.404714 2357 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-212-30\" not found" node="172-234-212-30" May 27 03:55:22.404807 kubelet[2357]: E0527 03:55:22.404790 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:22.461717 kubelet[2357]: I0527 03:55:22.461680 2357 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" May 27 03:55:22.512756 kubelet[2357]: I0527 03:55:22.512716 2357 kubelet_node_status.go:78] "Successfully registered node" node="172-234-212-30" May 27 03:55:22.512756 kubelet[2357]: E0527 03:55:22.512750 2357 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-234-212-30\": node \"172-234-212-30\" not found" May 27 03:55:22.563211 kubelet[2357]: I0527 03:55:22.563153 2357 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:22.568134 kubelet[2357]: E0527 03:55:22.568097 2357 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-212-30\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:22.568134 kubelet[2357]: I0527 03:55:22.568121 2357 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-212-30" May 27 03:55:22.570091 kubelet[2357]: E0527 03:55:22.570070 2357 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-212-30\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-212-30" May 27 03:55:22.570091 kubelet[2357]: I0527 03:55:22.570088 2357 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-212-30" May 27 03:55:22.571433 kubelet[2357]: E0527 03:55:22.571415 2357 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-212-30\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-212-30" May 27 03:55:23.403387 kubelet[2357]: I0527 03:55:23.403359 2357 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-212-30" May 27 03:55:23.409568 kubelet[2357]: E0527 03:55:23.409485 
2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:24.394642 systemd[1]: Reload requested from client PID 2624 ('systemctl') (unit session-7.scope)... May 27 03:55:24.394661 systemd[1]: Reloading... May 27 03:55:24.406756 kubelet[2357]: E0527 03:55:24.406653 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:24.497910 zram_generator::config[2668]: No configuration found. May 27 03:55:24.597589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:55:24.720520 systemd[1]: Reloading finished in 325 ms. May 27 03:55:24.759620 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:24.781635 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:55:24.781930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:24.781973 systemd[1]: kubelet.service: Consumed 778ms CPU time, 130.5M memory peak. May 27 03:55:24.785537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:55:24.974586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:55:24.982399 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:55:25.027950 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:55:25.027950 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:55:25.027950 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:55:25.028447 kubelet[2719]: I0527 03:55:25.028107 2719 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:55:25.037626 kubelet[2719]: I0527 03:55:25.037595 2719 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:55:25.037626 kubelet[2719]: I0527 03:55:25.037616 2719 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:55:25.037822 kubelet[2719]: I0527 03:55:25.037797 2719 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:55:25.038902 kubelet[2719]: I0527 03:55:25.038862 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 03:55:25.041054 kubelet[2719]: I0527 03:55:25.040656 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:55:25.044840 kubelet[2719]: I0527 03:55:25.044810 2719 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:55:25.047967 kubelet[2719]: I0527 03:55:25.047938 2719 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:55:25.048159 kubelet[2719]: I0527 03:55:25.048128 2719 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:55:25.051641 kubelet[2719]: I0527 03:55:25.048154 2719 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-212-30","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:55:25.051641 kubelet[2719]: I0527 03:55:25.050518 2719 topology_manager.go:138] "Creating topology manager with none 
policy" May 27 03:55:25.051641 kubelet[2719]: I0527 03:55:25.050533 2719 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:55:25.051641 kubelet[2719]: I0527 03:55:25.050576 2719 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:25.051641 kubelet[2719]: I0527 03:55:25.050725 2719 kubelet.go:446] "Attempting to sync node with API server" May 27 03:55:25.051824 kubelet[2719]: I0527 03:55:25.050744 2719 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:55:25.051824 kubelet[2719]: I0527 03:55:25.050767 2719 kubelet.go:352] "Adding apiserver pod source" May 27 03:55:25.051824 kubelet[2719]: I0527 03:55:25.050776 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:55:25.054978 kubelet[2719]: I0527 03:55:25.054951 2719 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:55:25.055302 kubelet[2719]: I0527 03:55:25.055279 2719 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:55:25.055715 kubelet[2719]: I0527 03:55:25.055677 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:55:25.055715 kubelet[2719]: I0527 03:55:25.055709 2719 server.go:1287] "Started kubelet" May 27 03:55:25.058344 kubelet[2719]: I0527 03:55:25.058298 2719 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:55:25.060891 kubelet[2719]: I0527 03:55:25.059742 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:55:25.067891 kubelet[2719]: I0527 03:55:25.066437 2719 server.go:479] "Adding debug handlers to kubelet server" May 27 03:55:25.068967 kubelet[2719]: I0527 03:55:25.068236 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:55:25.069201 kubelet[2719]: I0527 03:55:25.069131 2719 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:55:25.069516 kubelet[2719]: I0527 03:55:25.069497 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:55:25.071699 kubelet[2719]: I0527 03:55:25.071671 2719 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:55:25.072002 kubelet[2719]: E0527 03:55:25.071969 2719 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-212-30\" not found" May 27 03:55:25.074346 kubelet[2719]: I0527 03:55:25.073867 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:55:25.074346 kubelet[2719]: I0527 03:55:25.074007 2719 reconciler.go:26] "Reconciler: start to sync state" May 27 03:55:25.077012 kubelet[2719]: I0527 03:55:25.076955 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:55:25.081745 kubelet[2719]: I0527 03:55:25.081714 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:55:25.081794 kubelet[2719]: I0527 03:55:25.081755 2719 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:55:25.081794 kubelet[2719]: I0527 03:55:25.081774 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:55:25.081794 kubelet[2719]: I0527 03:55:25.081781 2719 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:55:25.081861 kubelet[2719]: E0527 03:55:25.081828 2719 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:55:25.082007 kubelet[2719]: I0527 03:55:25.081990 2719 factory.go:221] Registration of the systemd container factory successfully May 27 03:55:25.082200 kubelet[2719]: I0527 03:55:25.082180 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:55:25.090111 kubelet[2719]: I0527 03:55:25.090097 2719 factory.go:221] Registration of the containerd container factory successfully May 27 03:55:25.090851 kubelet[2719]: E0527 03:55:25.090517 2719 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:55:25.144729 kubelet[2719]: I0527 03:55:25.144711 2719 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:55:25.144814 kubelet[2719]: I0527 03:55:25.144803 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:55:25.144867 kubelet[2719]: I0527 03:55:25.144859 2719 state_mem.go:36] "Initialized new in-memory state store" May 27 03:55:25.145056 kubelet[2719]: I0527 03:55:25.145043 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:55:25.145119 kubelet[2719]: I0527 03:55:25.145100 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:55:25.145903 kubelet[2719]: I0527 03:55:25.145339 2719 policy_none.go:49] "None policy: Start" May 27 03:55:25.145903 kubelet[2719]: I0527 03:55:25.145350 2719 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:55:25.145903 kubelet[2719]: I0527 03:55:25.145359 2719 state_mem.go:35] "Initializing new in-memory state store" May 27 03:55:25.145903 kubelet[2719]: I0527 03:55:25.145445 2719 state_mem.go:75] "Updated machine memory state" May 27 03:55:25.150356 kubelet[2719]: I0527 03:55:25.150340 2719 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:55:25.150766 kubelet[2719]: I0527 03:55:25.150753 2719 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:55:25.150841 kubelet[2719]: I0527 03:55:25.150819 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:55:25.151046 kubelet[2719]: I0527 03:55:25.151034 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:55:25.152512 kubelet[2719]: E0527 03:55:25.152498 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:55:25.182574 kubelet[2719]: I0527 03:55:25.182511 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:25.182733 kubelet[2719]: I0527 03:55:25.182720 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-212-30" May 27 03:55:25.182856 kubelet[2719]: I0527 03:55:25.182831 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-212-30" May 27 03:55:25.190979 kubelet[2719]: E0527 03:55:25.190936 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-212-30\" already exists" pod="kube-system/kube-scheduler-172-234-212-30" May 27 03:55:25.254189 kubelet[2719]: I0527 03:55:25.254073 2719 kubelet_node_status.go:75] "Attempting to register node" node="172-234-212-30" May 27 03:55:25.260728 kubelet[2719]: I0527 03:55:25.260441 2719 kubelet_node_status.go:124] "Node was previously registered" node="172-234-212-30" May 27 03:55:25.260728 kubelet[2719]: I0527 03:55:25.260515 2719 kubelet_node_status.go:78] "Successfully registered node" node="172-234-212-30" May 27 03:55:25.275530 kubelet[2719]: I0527 03:55:25.275504 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30" May 27 03:55:25.275627 kubelet[2719]: I0527 03:55:25.275535 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dbb5adff9f5c11e328778536c4c55d3e-kubeconfig\") pod \"kube-scheduler-172-234-212-30\" (UID: 
\"dbb5adff9f5c11e328778536c4c55d3e\") " pod="kube-system/kube-scheduler-172-234-212-30"
May 27 03:55:25.275627 kubelet[2719]: I0527 03:55:25.275555 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1173a5928226451147dd4965136524fe-ca-certs\") pod \"kube-apiserver-172-234-212-30\" (UID: \"1173a5928226451147dd4965136524fe\") " pod="kube-system/kube-apiserver-172-234-212-30"
May 27 03:55:25.275627 kubelet[2719]: I0527 03:55:25.275571 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1173a5928226451147dd4965136524fe-k8s-certs\") pod \"kube-apiserver-172-234-212-30\" (UID: \"1173a5928226451147dd4965136524fe\") " pod="kube-system/kube-apiserver-172-234-212-30"
May 27 03:55:25.275627 kubelet[2719]: I0527 03:55:25.275587 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-ca-certs\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30"
May 27 03:55:25.275627 kubelet[2719]: I0527 03:55:25.275604 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-k8s-certs\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30"
May 27 03:55:25.275759 kubelet[2719]: I0527 03:55:25.275620 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1173a5928226451147dd4965136524fe-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-212-30\" (UID: \"1173a5928226451147dd4965136524fe\") " pod="kube-system/kube-apiserver-172-234-212-30"
May 27 03:55:25.275759 kubelet[2719]: I0527 03:55:25.275638 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-flexvolume-dir\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30"
May 27 03:55:25.275759 kubelet[2719]: I0527 03:55:25.275654 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbd567f844f73220e6f3f8e9272201ec-kubeconfig\") pod \"kube-controller-manager-172-234-212-30\" (UID: \"fbd567f844f73220e6f3f8e9272201ec\") " pod="kube-system/kube-controller-manager-172-234-212-30"
May 27 03:55:25.395236 sudo[2752]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 27 03:55:25.395796 sudo[2752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 27 03:55:25.489856 kubelet[2719]: E0527 03:55:25.489059 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:25.491233 kubelet[2719]: E0527 03:55:25.491216 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:25.491707 kubelet[2719]: E0527 03:55:25.491422 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:25.882710 sudo[2752]: pam_unix(sudo:session): session closed for user root
May 27 03:55:26.059530 kubelet[2719]: I0527 03:55:26.059475 2719 apiserver.go:52] "Watching apiserver"
May 27 03:55:26.075409 kubelet[2719]: I0527 03:55:26.075074 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 03:55:26.079979 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 03:55:26.121083 kubelet[2719]: I0527 03:55:26.117960 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-212-30"
May 27 03:55:26.121083 kubelet[2719]: I0527 03:55:26.118274 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-212-30"
May 27 03:55:26.121083 kubelet[2719]: I0527 03:55:26.118451 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-212-30"
May 27 03:55:26.129252 kubelet[2719]: E0527 03:55:26.129234 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-212-30\" already exists" pod="kube-system/kube-scheduler-172-234-212-30"
May 27 03:55:26.130643 kubelet[2719]: E0527 03:55:26.130628 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-212-30\" already exists" pod="kube-system/kube-controller-manager-172-234-212-30"
May 27 03:55:26.131549 kubelet[2719]: E0527 03:55:26.131047 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:26.132646 kubelet[2719]: E0527 03:55:26.132632 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:26.133353 kubelet[2719]: E0527 03:55:26.133083 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-212-30\" already exists" pod="kube-system/kube-apiserver-172-234-212-30"
May 27 03:55:26.136420 kubelet[2719]: E0527 03:55:26.136405 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:26.209275 kubelet[2719]: I0527 03:55:26.209230 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-212-30" podStartSLOduration=1.209202449 podStartE2EDuration="1.209202449s" podCreationTimestamp="2025-05-27 03:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:26.186813887 +0000 UTC m=+1.198465600" watchObservedRunningTime="2025-05-27 03:55:26.209202449 +0000 UTC m=+1.220854152"
May 27 03:55:26.231819 kubelet[2719]: I0527 03:55:26.231742 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-212-30" podStartSLOduration=3.231718258 podStartE2EDuration="3.231718258s" podCreationTimestamp="2025-05-27 03:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:26.210229536 +0000 UTC m=+1.221881249" watchObservedRunningTime="2025-05-27 03:55:26.231718258 +0000 UTC m=+1.243369961"
May 27 03:55:27.121779 kubelet[2719]: E0527 03:55:27.120931 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:27.121779 kubelet[2719]: E0527 03:55:27.121534 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:27.121779 kubelet[2719]: E0527 03:55:27.121724 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:27.168929 sudo[1802]: pam_unix(sudo:session): session closed for user root
May 27 03:55:27.220471 sshd[1801]: Connection closed by 139.178.68.195 port 60326
May 27 03:55:27.220946 sshd-session[1799]: pam_unix(sshd:session): session closed for user core
May 27 03:55:27.225846 systemd[1]: sshd@6-172.234.212.30:22-139.178.68.195:60326.service: Deactivated successfully.
May 27 03:55:27.228323 systemd[1]: session-7.scope: Deactivated successfully.
May 27 03:55:27.228524 systemd[1]: session-7.scope: Consumed 3.995s CPU time, 269M memory peak.
May 27 03:55:27.230242 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit.
May 27 03:55:27.232363 systemd-logind[1514]: Removed session 7.
May 27 03:55:28.123358 kubelet[2719]: E0527 03:55:28.123320 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:28.896260 kubelet[2719]: E0527 03:55:28.896230 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:29.322001 kubelet[2719]: E0527 03:55:29.321967 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:30.840169 kubelet[2719]: I0527 03:55:30.840139 2719 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 27 03:55:30.840560 containerd[1541]: time="2025-05-27T03:55:30.840518504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 27 03:55:30.840783 kubelet[2719]: I0527 03:55:30.840710 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 27 03:55:31.519156 kubelet[2719]: I0527 03:55:31.519005 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-212-30" podStartSLOduration=6.518943436 podStartE2EDuration="6.518943436s" podCreationTimestamp="2025-05-27 03:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:26.233057052 +0000 UTC m=+1.244708765" watchObservedRunningTime="2025-05-27 03:55:31.518943436 +0000 UTC m=+6.530595139"
May 27 03:55:31.530868 systemd[1]: Created slice kubepods-besteffort-pod50f7560d_65c0_472e_aaa2_aa740ee54c67.slice - libcontainer container kubepods-besteffort-pod50f7560d_65c0_472e_aaa2_aa740ee54c67.slice.
May 27 03:55:31.550133 systemd[1]: Created slice kubepods-burstable-pod3bd2b0ce_f53c_403f_999d_f88cb9399e82.slice - libcontainer container kubepods-burstable-pod3bd2b0ce_f53c_403f_999d_f88cb9399e82.slice.
May 27 03:55:31.614068 kubelet[2719]: I0527 03:55:31.614031 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-xtables-lock\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614409 kubelet[2719]: I0527 03:55:31.614178 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bd2b0ce-f53c-403f-999d-f88cb9399e82-clustermesh-secrets\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614409 kubelet[2719]: I0527 03:55:31.614198 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-net\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614409 kubelet[2719]: I0527 03:55:31.614349 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-kube-api-access-rdwtg\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614595 kubelet[2719]: I0527 03:55:31.614493 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-etc-cni-netd\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614595 kubelet[2719]: I0527 03:55:31.614513 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-run\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614595 kubelet[2719]: I0527 03:55:31.614528 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hostproc\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614778 kubelet[2719]: I0527 03:55:31.614647 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-kernel\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614778 kubelet[2719]: I0527 03:55:31.614674 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50f7560d-65c0-472e-aaa2-aa740ee54c67-lib-modules\") pod \"kube-proxy-8v5xs\" (UID: \"50f7560d-65c0-472e-aaa2-aa740ee54c67\") " pod="kube-system/kube-proxy-8v5xs"
May 27 03:55:31.614946 kubelet[2719]: I0527 03:55:31.614698 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50f7560d-65c0-472e-aaa2-aa740ee54c67-kube-proxy\") pod \"kube-proxy-8v5xs\" (UID: \"50f7560d-65c0-472e-aaa2-aa740ee54c67\") " pod="kube-system/kube-proxy-8v5xs"
May 27 03:55:31.614946 kubelet[2719]: I0527 03:55:31.614832 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-config-path\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.614946 kubelet[2719]: I0527 03:55:31.614857 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hubble-tls\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.615695 kubelet[2719]: I0527 03:55:31.615194 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50f7560d-65c0-472e-aaa2-aa740ee54c67-xtables-lock\") pod \"kube-proxy-8v5xs\" (UID: \"50f7560d-65c0-472e-aaa2-aa740ee54c67\") " pod="kube-system/kube-proxy-8v5xs"
May 27 03:55:31.615695 kubelet[2719]: I0527 03:55:31.615215 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-bpf-maps\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.615695 kubelet[2719]: I0527 03:55:31.615269 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-cgroup\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.615695 kubelet[2719]: I0527 03:55:31.615288 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mts\" (UniqueName: \"kubernetes.io/projected/50f7560d-65c0-472e-aaa2-aa740ee54c67-kube-api-access-v9mts\") pod \"kube-proxy-8v5xs\" (UID: \"50f7560d-65c0-472e-aaa2-aa740ee54c67\") " pod="kube-system/kube-proxy-8v5xs"
May 27 03:55:31.615695 kubelet[2719]: I0527 03:55:31.615310 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cni-path\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.615695 kubelet[2719]: I0527 03:55:31.615402 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-lib-modules\") pod \"cilium-zn6rg\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") " pod="kube-system/cilium-zn6rg"
May 27 03:55:31.842135 kubelet[2719]: E0527 03:55:31.842016 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:31.843648 containerd[1541]: time="2025-05-27T03:55:31.842908577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8v5xs,Uid:50f7560d-65c0-472e-aaa2-aa740ee54c67,Namespace:kube-system,Attempt:0,}"
May 27 03:55:31.853867 kubelet[2719]: E0527 03:55:31.853840 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:31.857489 containerd[1541]: time="2025-05-27T03:55:31.857451133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zn6rg,Uid:3bd2b0ce-f53c-403f-999d-f88cb9399e82,Namespace:kube-system,Attempt:0,}"
May 27 03:55:31.871568 containerd[1541]: time="2025-05-27T03:55:31.871421748Z" level=info msg="connecting to shim 66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a" address="unix:///run/containerd/s/f7c11ba8a3219f4f1b87f83e88b119f63eeec9a08f0fa7d11ad3a9a98fb70139" namespace=k8s.io protocol=ttrpc version=3
May 27 03:55:31.881294 containerd[1541]: time="2025-05-27T03:55:31.881225898Z" level=info msg="connecting to shim 77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0" address="unix:///run/containerd/s/5fb0c5dbd84f303862f648d4beac8f00032dcc119d88ab406c590c122b171335" namespace=k8s.io protocol=ttrpc version=3
May 27 03:55:31.908019 systemd[1]: Started cri-containerd-66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a.scope - libcontainer container 66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a.
May 27 03:55:31.913741 systemd[1]: Started cri-containerd-77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0.scope - libcontainer container 77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0.
May 27 03:55:31.952435 containerd[1541]: time="2025-05-27T03:55:31.952379510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zn6rg,Uid:3bd2b0ce-f53c-403f-999d-f88cb9399e82,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\""
May 27 03:55:31.953584 kubelet[2719]: E0527 03:55:31.953198 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:31.954844 containerd[1541]: time="2025-05-27T03:55:31.954771877Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 27 03:55:31.966315 containerd[1541]: time="2025-05-27T03:55:31.965321511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8v5xs,Uid:50f7560d-65c0-472e-aaa2-aa740ee54c67,Namespace:kube-system,Attempt:0,} returns sandbox id \"66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a\""
May 27 03:55:31.968215 kubelet[2719]: E0527 03:55:31.968195 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:31.971246 containerd[1541]: time="2025-05-27T03:55:31.971172745Z" level=info msg="CreateContainer within sandbox \"66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 27 03:55:31.982374 containerd[1541]: time="2025-05-27T03:55:31.982342141Z" level=info msg="Container bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6: CDI devices from CRI Config.CDIDevices: []"
May 27 03:55:31.988812 containerd[1541]: time="2025-05-27T03:55:31.988394410Z" level=info msg="CreateContainer within sandbox \"66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6\""
May 27 03:55:31.989937 containerd[1541]: time="2025-05-27T03:55:31.989103748Z" level=info msg="StartContainer for \"bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6\""
May 27 03:55:31.994031 containerd[1541]: time="2025-05-27T03:55:31.993978132Z" level=info msg="connecting to shim bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6" address="unix:///run/containerd/s/f7c11ba8a3219f4f1b87f83e88b119f63eeec9a08f0fa7d11ad3a9a98fb70139" protocol=ttrpc version=3
May 27 03:55:32.018372 systemd[1]: Created slice kubepods-besteffort-podcc40d681_1020_4117_8945_1be416a58bee.slice - libcontainer container kubepods-besteffort-podcc40d681_1020_4117_8945_1be416a58bee.slice.
May 27 03:55:32.018628 kubelet[2719]: I0527 03:55:32.018597 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxnb4\" (UniqueName: \"kubernetes.io/projected/cc40d681-1020-4117-8945-1be416a58bee-kube-api-access-hxnb4\") pod \"cilium-operator-6c4d7847fc-dhgsv\" (UID: \"cc40d681-1020-4117-8945-1be416a58bee\") " pod="kube-system/cilium-operator-6c4d7847fc-dhgsv"
May 27 03:55:32.018681 kubelet[2719]: I0527 03:55:32.018632 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc40d681-1020-4117-8945-1be416a58bee-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dhgsv\" (UID: \"cc40d681-1020-4117-8945-1be416a58bee\") " pod="kube-system/cilium-operator-6c4d7847fc-dhgsv"
May 27 03:55:32.025780 kubelet[2719]: I0527 03:55:32.025105 2719 status_manager.go:890] "Failed to get status for pod" podUID="cc40d681-1020-4117-8945-1be416a58bee" pod="kube-system/cilium-operator-6c4d7847fc-dhgsv" err="pods \"cilium-operator-6c4d7847fc-dhgsv\" is forbidden: User \"system:node:172-234-212-30\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-212-30' and this object"
May 27 03:55:32.035546 systemd[1]: Started cri-containerd-bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6.scope - libcontainer container bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6.
May 27 03:55:32.124092 containerd[1541]: time="2025-05-27T03:55:32.123612296Z" level=info msg="StartContainer for \"bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6\" returns successfully"
May 27 03:55:32.138460 kubelet[2719]: E0527 03:55:32.138090 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:32.156176 kubelet[2719]: I0527 03:55:32.156132 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8v5xs" podStartSLOduration=1.156115926 podStartE2EDuration="1.156115926s" podCreationTimestamp="2025-05-27 03:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:32.147902224 +0000 UTC m=+7.159553937" watchObservedRunningTime="2025-05-27 03:55:32.156115926 +0000 UTC m=+7.167767629"
May 27 03:55:32.325718 kubelet[2719]: E0527 03:55:32.325671 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:32.326152 containerd[1541]: time="2025-05-27T03:55:32.326078550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dhgsv,Uid:cc40d681-1020-4117-8945-1be416a58bee,Namespace:kube-system,Attempt:0,}"
May 27 03:55:32.344131 containerd[1541]: time="2025-05-27T03:55:32.344094017Z" level=info msg="connecting to shim 68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2" address="unix:///run/containerd/s/1fe83c0c58d225a9577626a78ae97809dbfa8b2903aa3e89e15999bc86de3d38" namespace=k8s.io protocol=ttrpc version=3
May 27 03:55:32.372033 systemd[1]: Started cri-containerd-68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2.scope - libcontainer container 68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2.
May 27 03:55:32.435407 containerd[1541]: time="2025-05-27T03:55:32.435370031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dhgsv,Uid:cc40d681-1020-4117-8945-1be416a58bee,Namespace:kube-system,Attempt:0,} returns sandbox id \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\""
May 27 03:55:32.436758 kubelet[2719]: E0527 03:55:32.436539 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:35.692290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875473997.mount: Deactivated successfully.
May 27 03:55:36.840399 kubelet[2719]: E0527 03:55:36.840359 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:37.156923 kubelet[2719]: E0527 03:55:37.152899 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:37.410722 containerd[1541]: time="2025-05-27T03:55:37.410526187Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:37.413064 containerd[1541]: time="2025-05-27T03:55:37.413002999Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 27 03:55:37.414693 containerd[1541]: time="2025-05-27T03:55:37.413766442Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:37.415119 containerd[1541]: time="2025-05-27T03:55:37.415084722Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.460282423s"
May 27 03:55:37.415163 containerd[1541]: time="2025-05-27T03:55:37.415119743Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 27 03:55:37.419134 containerd[1541]: time="2025-05-27T03:55:37.419103274Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 27 03:55:37.421695 containerd[1541]: time="2025-05-27T03:55:37.421672015Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 03:55:37.431088 containerd[1541]: time="2025-05-27T03:55:37.431063829Z" level=info msg="Container 5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb: CDI devices from CRI Config.CDIDevices: []"
May 27 03:55:37.435374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466287996.mount: Deactivated successfully.
May 27 03:55:37.442720 containerd[1541]: time="2025-05-27T03:55:37.442683362Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\""
May 27 03:55:37.443574 containerd[1541]: time="2025-05-27T03:55:37.443504892Z" level=info msg="StartContainer for \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\""
May 27 03:55:37.444636 containerd[1541]: time="2025-05-27T03:55:37.444617360Z" level=info msg="connecting to shim 5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb" address="unix:///run/containerd/s/5fb0c5dbd84f303862f648d4beac8f00032dcc119d88ab406c590c122b171335" protocol=ttrpc version=3
May 27 03:55:37.474019 systemd[1]: Started cri-containerd-5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb.scope - libcontainer container 5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb.
May 27 03:55:37.508900 containerd[1541]: time="2025-05-27T03:55:37.507536786Z" level=info msg="StartContainer for \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" returns successfully"
May 27 03:55:37.519219 systemd[1]: cri-containerd-5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb.scope: Deactivated successfully.
May 27 03:55:37.521434 containerd[1541]: time="2025-05-27T03:55:37.521402370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" id:\"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" pid:3137 exited_at:{seconds:1748318137 nanos:521003809}"
May 27 03:55:37.521508 containerd[1541]: time="2025-05-27T03:55:37.521490087Z" level=info msg="received exit event container_id:\"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" id:\"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" pid:3137 exited_at:{seconds:1748318137 nanos:521003809}"
May 27 03:55:37.541963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb-rootfs.mount: Deactivated successfully.
May 27 03:55:38.157180 kubelet[2719]: E0527 03:55:38.155005 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:38.157690 containerd[1541]: time="2025-05-27T03:55:38.157208078Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 03:55:38.179650 containerd[1541]: time="2025-05-27T03:55:38.179169434Z" level=info msg="Container 1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731: CDI devices from CRI Config.CDIDevices: []"
May 27 03:55:38.186060 containerd[1541]: time="2025-05-27T03:55:38.186014388Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\""
May 27 03:55:38.186662 containerd[1541]: time="2025-05-27T03:55:38.186605609Z" level=info msg="StartContainer for \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\""
May 27 03:55:38.189486 containerd[1541]: time="2025-05-27T03:55:38.189455151Z" level=info msg="connecting to shim 1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731" address="unix:///run/containerd/s/5fb0c5dbd84f303862f648d4beac8f00032dcc119d88ab406c590c122b171335" protocol=ttrpc version=3
May 27 03:55:38.217003 systemd[1]: Started cri-containerd-1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731.scope - libcontainer container 1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731.
May 27 03:55:38.246922 containerd[1541]: time="2025-05-27T03:55:38.245511832Z" level=info msg="StartContainer for \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" returns successfully"
May 27 03:55:38.262777 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:55:38.263362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:55:38.263951 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 03:55:38.265813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:55:38.266189 systemd[1]: cri-containerd-1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731.scope: Deactivated successfully.
May 27 03:55:38.267068 containerd[1541]: time="2025-05-27T03:55:38.266806845Z" level=info msg="received exit event container_id:\"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" id:\"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" pid:3183 exited_at:{seconds:1748318138 nanos:266286345}"
May 27 03:55:38.268593 containerd[1541]: time="2025-05-27T03:55:38.267242460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" id:\"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" pid:3183 exited_at:{seconds:1748318138 nanos:266286345}"
May 27 03:55:38.299986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:55:38.807541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389304649.mount: Deactivated successfully.
May 27 03:55:38.906602 kubelet[2719]: E0527 03:55:38.906476 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:39.162318 kubelet[2719]: E0527 03:55:39.161717 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:39.168056 containerd[1541]: time="2025-05-27T03:55:39.168007659Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 03:55:39.183076 containerd[1541]: time="2025-05-27T03:55:39.183027903Z" level=info msg="Container 83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a: CDI devices from CRI Config.CDIDevices: []"
May 27 03:55:39.190239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127225091.mount: Deactivated successfully.
May 27 03:55:39.195262 containerd[1541]: time="2025-05-27T03:55:39.195185115Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\""
May 27 03:55:39.196908 containerd[1541]: time="2025-05-27T03:55:39.195809575Z" level=info msg="StartContainer for \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\""
May 27 03:55:39.196979 containerd[1541]: time="2025-05-27T03:55:39.196861463Z" level=info msg="connecting to shim 83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a" address="unix:///run/containerd/s/5fb0c5dbd84f303862f648d4beac8f00032dcc119d88ab406c590c122b171335" protocol=ttrpc version=3
May 27 03:55:39.235163 systemd[1]: Started cri-containerd-83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a.scope - libcontainer container 83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a.
May 27 03:55:39.318391 systemd[1]: cri-containerd-83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a.scope: Deactivated successfully.
May 27 03:55:39.320892 containerd[1541]: time="2025-05-27T03:55:39.320797005Z" level=info msg="StartContainer for \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" returns successfully"
May 27 03:55:39.327244 containerd[1541]: time="2025-05-27T03:55:39.327038074Z" level=info msg="received exit event container_id:\"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" id:\"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" pid:3242 exited_at:{seconds:1748318139 nanos:325715562}"
May 27 03:55:39.329061 containerd[1541]: time="2025-05-27T03:55:39.328904866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" id:\"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" pid:3242 exited_at:{seconds:1748318139 nanos:325715562}"
May 27 03:55:39.337557 kubelet[2719]: E0527 03:55:39.336623 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:55:39.463103 containerd[1541]: time="2025-05-27T03:55:39.463046553Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:39.465624 containerd[1541]: time="2025-05-27T03:55:39.465569395Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 27 03:55:39.468851 containerd[1541]: time="2025-05-27T03:55:39.468536467Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:55:39.471496 containerd[1541]:
time="2025-05-27T03:55:39.471464960Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.052241181s" May 27 03:55:39.471629 containerd[1541]: time="2025-05-27T03:55:39.471603588Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 27 03:55:39.479630 containerd[1541]: time="2025-05-27T03:55:39.479586415Z" level=info msg="CreateContainer within sandbox \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 03:55:39.489866 containerd[1541]: time="2025-05-27T03:55:39.489829130Z" level=info msg="Container b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:39.493059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855658713.mount: Deactivated successfully. 
May 27 03:55:39.499444 containerd[1541]: time="2025-05-27T03:55:39.499410825Z" level=info msg="CreateContainer within sandbox \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\"" May 27 03:55:39.500589 containerd[1541]: time="2025-05-27T03:55:39.500494662Z" level=info msg="StartContainer for \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\"" May 27 03:55:39.501981 containerd[1541]: time="2025-05-27T03:55:39.501949481Z" level=info msg="connecting to shim b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70" address="unix:///run/containerd/s/1fe83c0c58d225a9577626a78ae97809dbfa8b2903aa3e89e15999bc86de3d38" protocol=ttrpc version=3 May 27 03:55:39.530037 systemd[1]: Started cri-containerd-b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70.scope - libcontainer container b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70. May 27 03:55:39.573026 containerd[1541]: time="2025-05-27T03:55:39.572659931Z" level=info msg="StartContainer for \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" returns successfully" May 27 03:55:39.799186 update_engine[1518]: I20250527 03:55:39.798914 1518 update_attempter.cc:509] Updating boot flags... 
May 27 03:55:40.172395 kubelet[2719]: E0527 03:55:40.172018 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:40.177052 kubelet[2719]: E0527 03:55:40.177004 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:40.177233 containerd[1541]: time="2025-05-27T03:55:40.177182474Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 03:55:40.192826 containerd[1541]: time="2025-05-27T03:55:40.192785386Z" level=info msg="Container 583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:40.202925 containerd[1541]: time="2025-05-27T03:55:40.202433248Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\"" May 27 03:55:40.205867 containerd[1541]: time="2025-05-27T03:55:40.205846366Z" level=info msg="StartContainer for \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\"" May 27 03:55:40.208029 containerd[1541]: time="2025-05-27T03:55:40.208009519Z" level=info msg="connecting to shim 583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743" address="unix:///run/containerd/s/5fb0c5dbd84f303862f648d4beac8f00032dcc119d88ab406c590c122b171335" protocol=ttrpc version=3 May 27 03:55:40.237025 systemd[1]: Started cri-containerd-583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743.scope - libcontainer container 
583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743. May 27 03:55:40.341321 containerd[1541]: time="2025-05-27T03:55:40.341256216Z" level=info msg="StartContainer for \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" returns successfully" May 27 03:55:40.343035 systemd[1]: cri-containerd-583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743.scope: Deactivated successfully. May 27 03:55:40.344587 containerd[1541]: time="2025-05-27T03:55:40.344565818Z" level=info msg="received exit event container_id:\"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" id:\"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" pid:3339 exited_at:{seconds:1748318140 nanos:343909637}" May 27 03:55:40.345280 containerd[1541]: time="2025-05-27T03:55:40.345017996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" id:\"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" pid:3339 exited_at:{seconds:1748318140 nanos:343909637}" May 27 03:55:41.182258 kubelet[2719]: E0527 03:55:41.181180 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:41.182258 kubelet[2719]: E0527 03:55:41.181230 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:41.184011 containerd[1541]: time="2025-05-27T03:55:41.183473531Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 03:55:41.203276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708221644.mount: Deactivated successfully. 
May 27 03:55:41.205976 containerd[1541]: time="2025-05-27T03:55:41.205950075Z" level=info msg="Container 6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:41.213373 kubelet[2719]: I0527 03:55:41.213316 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dhgsv" podStartSLOduration=3.176114106 podStartE2EDuration="10.213292233s" podCreationTimestamp="2025-05-27 03:55:31 +0000 UTC" firstStartedPulling="2025-05-27 03:55:32.437440096 +0000 UTC m=+7.449091799" lastFinishedPulling="2025-05-27 03:55:39.474618223 +0000 UTC m=+14.486269926" observedRunningTime="2025-05-27 03:55:40.359795632 +0000 UTC m=+15.371447355" watchObservedRunningTime="2025-05-27 03:55:41.213292233 +0000 UTC m=+16.224943936" May 27 03:55:41.221192 containerd[1541]: time="2025-05-27T03:55:41.221127913Z" level=info msg="CreateContainer within sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\"" May 27 03:55:41.221816 containerd[1541]: time="2025-05-27T03:55:41.221785005Z" level=info msg="StartContainer for \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\"" May 27 03:55:41.224273 containerd[1541]: time="2025-05-27T03:55:41.224251476Z" level=info msg="connecting to shim 6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd" address="unix:///run/containerd/s/5fb0c5dbd84f303862f648d4beac8f00032dcc119d88ab406c590c122b171335" protocol=ttrpc version=3 May 27 03:55:41.250208 systemd[1]: Started cri-containerd-6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd.scope - libcontainer container 6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd. 
May 27 03:55:41.302040 containerd[1541]: time="2025-05-27T03:55:41.301988299Z" level=info msg="StartContainer for \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" returns successfully" May 27 03:55:41.411633 containerd[1541]: time="2025-05-27T03:55:41.411583470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" id:\"7cb2732dbf781b0721ac69fe66ec74e39f82e3d12577838dae5781e4bebb46cc\" pid:3408 exited_at:{seconds:1748318141 nanos:411291778}" May 27 03:55:41.448471 kubelet[2719]: I0527 03:55:41.448383 2719 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:55:41.492411 systemd[1]: Created slice kubepods-burstable-podc6d6dd4f_e6e0_41ba_8d4b_b78526c66db9.slice - libcontainer container kubepods-burstable-podc6d6dd4f_e6e0_41ba_8d4b_b78526c66db9.slice. May 27 03:55:41.499071 systemd[1]: Created slice kubepods-burstable-pod7ef678ff_1001_4cf0_8ff6_e696343f74f9.slice - libcontainer container kubepods-burstable-pod7ef678ff_1001_4cf0_8ff6_e696343f74f9.slice. 
May 27 03:55:41.502438 kubelet[2719]: I0527 03:55:41.501574 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d6dd4f-e6e0-41ba-8d4b-b78526c66db9-config-volume\") pod \"coredns-668d6bf9bc-9thhk\" (UID: \"c6d6dd4f-e6e0-41ba-8d4b-b78526c66db9\") " pod="kube-system/coredns-668d6bf9bc-9thhk" May 27 03:55:41.502438 kubelet[2719]: I0527 03:55:41.501612 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vxj\" (UniqueName: \"kubernetes.io/projected/c6d6dd4f-e6e0-41ba-8d4b-b78526c66db9-kube-api-access-t2vxj\") pod \"coredns-668d6bf9bc-9thhk\" (UID: \"c6d6dd4f-e6e0-41ba-8d4b-b78526c66db9\") " pod="kube-system/coredns-668d6bf9bc-9thhk" May 27 03:55:41.502646 kubelet[2719]: I0527 03:55:41.502591 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ef678ff-1001-4cf0-8ff6-e696343f74f9-config-volume\") pod \"coredns-668d6bf9bc-dxzzm\" (UID: \"7ef678ff-1001-4cf0-8ff6-e696343f74f9\") " pod="kube-system/coredns-668d6bf9bc-dxzzm" May 27 03:55:41.502730 kubelet[2719]: I0527 03:55:41.502717 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4hn\" (UniqueName: \"kubernetes.io/projected/7ef678ff-1001-4cf0-8ff6-e696343f74f9-kube-api-access-tl4hn\") pod \"coredns-668d6bf9bc-dxzzm\" (UID: \"7ef678ff-1001-4cf0-8ff6-e696343f74f9\") " pod="kube-system/coredns-668d6bf9bc-dxzzm" May 27 03:55:41.804440 kubelet[2719]: E0527 03:55:41.803574 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:41.805067 kubelet[2719]: E0527 03:55:41.804641 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:41.806617 containerd[1541]: time="2025-05-27T03:55:41.806251300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dxzzm,Uid:7ef678ff-1001-4cf0-8ff6-e696343f74f9,Namespace:kube-system,Attempt:0,}" May 27 03:55:41.807024 containerd[1541]: time="2025-05-27T03:55:41.806762606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9thhk,Uid:c6d6dd4f-e6e0-41ba-8d4b-b78526c66db9,Namespace:kube-system,Attempt:0,}" May 27 03:55:42.194590 kubelet[2719]: E0527 03:55:42.194529 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:42.210714 kubelet[2719]: I0527 03:55:42.210629 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zn6rg" podStartSLOduration=5.747468253 podStartE2EDuration="11.210612819s" podCreationTimestamp="2025-05-27 03:55:31 +0000 UTC" firstStartedPulling="2025-05-27 03:55:31.954262733 +0000 UTC m=+6.965914436" lastFinishedPulling="2025-05-27 03:55:37.417407289 +0000 UTC m=+12.429059002" observedRunningTime="2025-05-27 03:55:42.209374447 +0000 UTC m=+17.221026170" watchObservedRunningTime="2025-05-27 03:55:42.210612819 +0000 UTC m=+17.222264523" May 27 03:55:43.197976 kubelet[2719]: E0527 03:55:43.197910 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:43.637400 systemd-networkd[1465]: cilium_host: Link UP May 27 03:55:43.638305 systemd-networkd[1465]: cilium_net: Link UP May 27 03:55:43.638740 systemd-networkd[1465]: cilium_net: Gained carrier May 27 03:55:43.640266 systemd-networkd[1465]: cilium_host: Gained carrier 
May 27 03:55:43.767679 systemd-networkd[1465]: cilium_vxlan: Link UP May 27 03:55:43.767690 systemd-networkd[1465]: cilium_vxlan: Gained carrier May 27 03:55:43.876207 systemd-networkd[1465]: cilium_host: Gained IPv6LL May 27 03:55:43.993720 kernel: NET: Registered PF_ALG protocol family May 27 03:55:44.203776 kubelet[2719]: E0527 03:55:44.203717 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:44.468320 systemd-networkd[1465]: cilium_net: Gained IPv6LL May 27 03:55:44.683915 systemd-networkd[1465]: lxc_health: Link UP May 27 03:55:44.684248 systemd-networkd[1465]: lxc_health: Gained carrier May 27 03:55:44.883923 kernel: eth0: renamed from tmpb14bc May 27 03:55:44.883223 systemd-networkd[1465]: lxc02ad49b5e951: Link UP May 27 03:55:44.897021 systemd-networkd[1465]: lxc02ad49b5e951: Gained carrier May 27 03:55:44.897211 systemd-networkd[1465]: lxce8be171f1642: Link UP May 27 03:55:44.899053 kernel: eth0: renamed from tmp0a329 May 27 03:55:44.904305 systemd-networkd[1465]: lxce8be171f1642: Gained carrier May 27 03:55:45.364076 systemd-networkd[1465]: cilium_vxlan: Gained IPv6LL May 27 03:55:45.857356 kubelet[2719]: E0527 03:55:45.856986 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:45.942251 systemd-networkd[1465]: lxc_health: Gained IPv6LL May 27 03:55:46.068337 systemd-networkd[1465]: lxc02ad49b5e951: Gained IPv6LL May 27 03:55:46.213711 kubelet[2719]: E0527 03:55:46.213600 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:46.644226 systemd-networkd[1465]: lxce8be171f1642: Gained IPv6LL May 27 
03:55:47.214809 kubelet[2719]: E0527 03:55:47.214745 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:48.245223 containerd[1541]: time="2025-05-27T03:55:48.245102652Z" level=info msg="connecting to shim 0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264" address="unix:///run/containerd/s/39b96ad2f4b346ae842261a30bf889902df5283fa15999de85a9ff64e98d8f5b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:48.255938 containerd[1541]: time="2025-05-27T03:55:48.255183326Z" level=info msg="connecting to shim b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19" address="unix:///run/containerd/s/94487fa71881b3f88b631e81721f94f6595f499e0614d3ff7129b9e6ec5cf46f" namespace=k8s.io protocol=ttrpc version=3 May 27 03:55:48.295028 systemd[1]: Started cri-containerd-0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264.scope - libcontainer container 0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264. May 27 03:55:48.312783 systemd[1]: Started cri-containerd-b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19.scope - libcontainer container b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19. 
May 27 03:55:48.388931 containerd[1541]: time="2025-05-27T03:55:48.388747698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9thhk,Uid:c6d6dd4f-e6e0-41ba-8d4b-b78526c66db9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264\"" May 27 03:55:48.390140 kubelet[2719]: E0527 03:55:48.390069 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:48.394120 containerd[1541]: time="2025-05-27T03:55:48.394052322Z" level=info msg="CreateContainer within sandbox \"0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:55:48.409381 containerd[1541]: time="2025-05-27T03:55:48.409340651Z" level=info msg="Container ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:48.415861 containerd[1541]: time="2025-05-27T03:55:48.415828476Z" level=info msg="CreateContainer within sandbox \"0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72\"" May 27 03:55:48.417165 containerd[1541]: time="2025-05-27T03:55:48.416936793Z" level=info msg="StartContainer for \"ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72\"" May 27 03:55:48.419170 containerd[1541]: time="2025-05-27T03:55:48.419117741Z" level=info msg="connecting to shim ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72" address="unix:///run/containerd/s/39b96ad2f4b346ae842261a30bf889902df5283fa15999de85a9ff64e98d8f5b" protocol=ttrpc version=3 May 27 03:55:48.424483 containerd[1541]: time="2025-05-27T03:55:48.424451680Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-dxzzm,Uid:7ef678ff-1001-4cf0-8ff6-e696343f74f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19\"" May 27 03:55:48.425664 kubelet[2719]: E0527 03:55:48.425624 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:48.432828 containerd[1541]: time="2025-05-27T03:55:48.432719251Z" level=info msg="CreateContainer within sandbox \"b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:55:48.441444 containerd[1541]: time="2025-05-27T03:55:48.440982041Z" level=info msg="Container 157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234: CDI devices from CRI Config.CDIDevices: []" May 27 03:55:48.452103 systemd[1]: Started cri-containerd-ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72.scope - libcontainer container ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72. 
May 27 03:55:48.458383 containerd[1541]: time="2025-05-27T03:55:48.458354162Z" level=info msg="CreateContainer within sandbox \"b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234\"" May 27 03:55:48.461097 containerd[1541]: time="2025-05-27T03:55:48.461048741Z" level=info msg="StartContainer for \"157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234\"" May 27 03:55:48.463476 containerd[1541]: time="2025-05-27T03:55:48.463222287Z" level=info msg="connecting to shim 157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234" address="unix:///run/containerd/s/94487fa71881b3f88b631e81721f94f6595f499e0614d3ff7129b9e6ec5cf46f" protocol=ttrpc version=3 May 27 03:55:48.486055 systemd[1]: Started cri-containerd-157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234.scope - libcontainer container 157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234. May 27 03:55:48.515792 containerd[1541]: time="2025-05-27T03:55:48.515449539Z" level=info msg="StartContainer for \"ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72\" returns successfully" May 27 03:55:48.554900 containerd[1541]: time="2025-05-27T03:55:48.554222707Z" level=info msg="StartContainer for \"157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234\" returns successfully" May 27 03:55:49.218911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225298852.mount: Deactivated successfully. 
May 27 03:55:49.241979 kubelet[2719]: E0527 03:55:49.241530 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:49.245335 kubelet[2719]: E0527 03:55:49.245299 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:49.271518 kubelet[2719]: I0527 03:55:49.271459 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9thhk" podStartSLOduration=18.271439542 podStartE2EDuration="18.271439542s" podCreationTimestamp="2025-05-27 03:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:49.270154283 +0000 UTC m=+24.281806006" watchObservedRunningTime="2025-05-27 03:55:49.271439542 +0000 UTC m=+24.283091245" May 27 03:55:49.271662 kubelet[2719]: I0527 03:55:49.271538 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dxzzm" podStartSLOduration=18.271534018 podStartE2EDuration="18.271534018s" podCreationTimestamp="2025-05-27 03:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:55:49.255793748 +0000 UTC m=+24.267445461" watchObservedRunningTime="2025-05-27 03:55:49.271534018 +0000 UTC m=+24.283185731" May 27 03:55:50.247127 kubelet[2719]: E0527 03:55:50.247085 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:50.247524 kubelet[2719]: E0527 03:55:50.247085 2719 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:51.249145 kubelet[2719]: E0527 03:55:51.249064 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:55:51.249145 kubelet[2719]: E0527 03:55:51.249083 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:56:49.083948 kubelet[2719]: E0527 03:56:49.083078 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:56:50.082665 kubelet[2719]: E0527 03:56:50.082633 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:56:53.083947 kubelet[2719]: E0527 03:56:53.083336 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:57:05.083384 kubelet[2719]: E0527 03:57:05.082827 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:57:06.082446 kubelet[2719]: E0527 03:57:06.082411 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:57:09.083403 kubelet[2719]: E0527 03:57:09.082858 2719 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:57:16.083196 kubelet[2719]: E0527 03:57:16.082761 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:57:16.083196 kubelet[2719]: E0527 03:57:16.083052 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:57:27.103430 systemd[1]: Started sshd@7-172.234.212.30:22-139.178.68.195:50582.service - OpenSSH per-connection server daemon (139.178.68.195:50582).
May 27 03:57:27.434901 sshd[4050]: Accepted publickey for core from 139.178.68.195 port 50582 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:27.436010 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:27.440639 systemd-logind[1514]: New session 8 of user core.
May 27 03:57:27.447029 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 03:57:27.771336 sshd[4052]: Connection closed by 139.178.68.195 port 50582
May 27 03:57:27.772114 sshd-session[4050]: pam_unix(sshd:session): session closed for user core
May 27 03:57:27.776322 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit.
May 27 03:57:27.776808 systemd[1]: sshd@7-172.234.212.30:22-139.178.68.195:50582.service: Deactivated successfully.
May 27 03:57:27.779645 systemd[1]: session-8.scope: Deactivated successfully.
May 27 03:57:27.781604 systemd-logind[1514]: Removed session 8.
May 27 03:57:32.842069 systemd[1]: Started sshd@8-172.234.212.30:22-139.178.68.195:50588.service - OpenSSH per-connection server daemon (139.178.68.195:50588).
May 27 03:57:33.184666 sshd[4072]: Accepted publickey for core from 139.178.68.195 port 50588 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:33.186325 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:33.192561 systemd-logind[1514]: New session 9 of user core.
May 27 03:57:33.198032 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 03:57:33.508687 sshd[4074]: Connection closed by 139.178.68.195 port 50588
May 27 03:57:33.509588 sshd-session[4072]: pam_unix(sshd:session): session closed for user core
May 27 03:57:33.514499 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit.
May 27 03:57:33.515190 systemd[1]: sshd@8-172.234.212.30:22-139.178.68.195:50588.service: Deactivated successfully.
May 27 03:57:33.518265 systemd[1]: session-9.scope: Deactivated successfully.
May 27 03:57:33.520609 systemd-logind[1514]: Removed session 9.
May 27 03:57:38.568352 systemd[1]: Started sshd@9-172.234.212.30:22-139.178.68.195:46254.service - OpenSSH per-connection server daemon (139.178.68.195:46254).
May 27 03:57:38.904237 sshd[4087]: Accepted publickey for core from 139.178.68.195 port 46254 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:38.905389 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:38.910996 systemd-logind[1514]: New session 10 of user core.
May 27 03:57:38.915167 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 03:57:39.200617 sshd[4089]: Connection closed by 139.178.68.195 port 46254
May 27 03:57:39.201175 sshd-session[4087]: pam_unix(sshd:session): session closed for user core
May 27 03:57:39.205127 systemd[1]: sshd@9-172.234.212.30:22-139.178.68.195:46254.service: Deactivated successfully.
May 27 03:57:39.207101 systemd[1]: session-10.scope: Deactivated successfully.
May 27 03:57:39.207951 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit.
May 27 03:57:39.209429 systemd-logind[1514]: Removed session 10.
May 27 03:57:39.264848 systemd[1]: Started sshd@10-172.234.212.30:22-139.178.68.195:46266.service - OpenSSH per-connection server daemon (139.178.68.195:46266).
May 27 03:57:39.596613 sshd[4102]: Accepted publickey for core from 139.178.68.195 port 46266 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:39.597690 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:39.601782 systemd-logind[1514]: New session 11 of user core.
May 27 03:57:39.605998 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 03:57:39.930007 sshd[4104]: Connection closed by 139.178.68.195 port 46266
May 27 03:57:39.930633 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
May 27 03:57:39.935396 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit.
May 27 03:57:39.936017 systemd[1]: sshd@10-172.234.212.30:22-139.178.68.195:46266.service: Deactivated successfully.
May 27 03:57:39.938773 systemd[1]: session-11.scope: Deactivated successfully.
May 27 03:57:39.940514 systemd-logind[1514]: Removed session 11.
May 27 03:57:39.994042 systemd[1]: Started sshd@11-172.234.212.30:22-139.178.68.195:46276.service - OpenSSH per-connection server daemon (139.178.68.195:46276).
May 27 03:57:40.331981 sshd[4114]: Accepted publickey for core from 139.178.68.195 port 46276 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:40.333312 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:40.338258 systemd-logind[1514]: New session 12 of user core.
May 27 03:57:40.345989 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 03:57:40.633276 sshd[4116]: Connection closed by 139.178.68.195 port 46276
May 27 03:57:40.634907 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
May 27 03:57:40.639342 systemd[1]: sshd@11-172.234.212.30:22-139.178.68.195:46276.service: Deactivated successfully.
May 27 03:57:40.641511 systemd[1]: session-12.scope: Deactivated successfully.
May 27 03:57:40.642493 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit.
May 27 03:57:40.644251 systemd-logind[1514]: Removed session 12.
May 27 03:57:45.697741 systemd[1]: Started sshd@12-172.234.212.30:22-139.178.68.195:53202.service - OpenSSH per-connection server daemon (139.178.68.195:53202).
May 27 03:57:46.045559 sshd[4127]: Accepted publickey for core from 139.178.68.195 port 53202 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:46.047554 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:46.053326 systemd-logind[1514]: New session 13 of user core.
May 27 03:57:46.058031 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 03:57:46.352920 sshd[4129]: Connection closed by 139.178.68.195 port 53202
May 27 03:57:46.353829 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
May 27 03:57:46.358095 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit.
May 27 03:57:46.358696 systemd[1]: sshd@12-172.234.212.30:22-139.178.68.195:53202.service: Deactivated successfully.
May 27 03:57:46.360814 systemd[1]: session-13.scope: Deactivated successfully.
May 27 03:57:46.362860 systemd-logind[1514]: Removed session 13.
May 27 03:57:51.415642 systemd[1]: Started sshd@13-172.234.212.30:22-139.178.68.195:53204.service - OpenSSH per-connection server daemon (139.178.68.195:53204).
May 27 03:57:51.772633 sshd[4141]: Accepted publickey for core from 139.178.68.195 port 53204 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:51.774051 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:51.779628 systemd-logind[1514]: New session 14 of user core.
May 27 03:57:51.784228 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 03:57:52.081640 sshd[4143]: Connection closed by 139.178.68.195 port 53204
May 27 03:57:52.082675 sshd-session[4141]: pam_unix(sshd:session): session closed for user core
May 27 03:57:52.087441 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit.
May 27 03:57:52.088468 systemd[1]: sshd@13-172.234.212.30:22-139.178.68.195:53204.service: Deactivated successfully.
May 27 03:57:52.091385 systemd[1]: session-14.scope: Deactivated successfully.
May 27 03:57:52.093268 systemd-logind[1514]: Removed session 14.
May 27 03:57:52.144604 systemd[1]: Started sshd@14-172.234.212.30:22-139.178.68.195:53210.service - OpenSSH per-connection server daemon (139.178.68.195:53210).
May 27 03:57:52.494971 sshd[4155]: Accepted publickey for core from 139.178.68.195 port 53210 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:52.496417 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:52.501987 systemd-logind[1514]: New session 15 of user core.
May 27 03:57:52.507226 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 03:57:52.819858 sshd[4157]: Connection closed by 139.178.68.195 port 53210
May 27 03:57:52.820667 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
May 27 03:57:52.824800 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit.
May 27 03:57:52.825302 systemd[1]: sshd@14-172.234.212.30:22-139.178.68.195:53210.service: Deactivated successfully.
May 27 03:57:52.828314 systemd[1]: session-15.scope: Deactivated successfully.
May 27 03:57:52.830079 systemd-logind[1514]: Removed session 15.
May 27 03:57:52.880703 systemd[1]: Started sshd@15-172.234.212.30:22-139.178.68.195:53226.service - OpenSSH per-connection server daemon (139.178.68.195:53226).
May 27 03:57:53.225310 sshd[4167]: Accepted publickey for core from 139.178.68.195 port 53226 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:53.226788 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:53.232043 systemd-logind[1514]: New session 16 of user core.
May 27 03:57:53.241029 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 03:57:54.089460 sshd[4169]: Connection closed by 139.178.68.195 port 53226
May 27 03:57:54.090044 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
May 27 03:57:54.095065 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit.
May 27 03:57:54.096729 systemd[1]: sshd@15-172.234.212.30:22-139.178.68.195:53226.service: Deactivated successfully.
May 27 03:57:54.100255 systemd[1]: session-16.scope: Deactivated successfully.
May 27 03:57:54.102555 systemd-logind[1514]: Removed session 16.
May 27 03:57:54.152005 systemd[1]: Started sshd@16-172.234.212.30:22-139.178.68.195:37722.service - OpenSSH per-connection server daemon (139.178.68.195:37722).
May 27 03:57:54.503124 sshd[4186]: Accepted publickey for core from 139.178.68.195 port 37722 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:54.504749 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:54.510814 systemd-logind[1514]: New session 17 of user core.
May 27 03:57:54.518004 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 03:57:54.922408 sshd[4188]: Connection closed by 139.178.68.195 port 37722
May 27 03:57:54.922981 sshd-session[4186]: pam_unix(sshd:session): session closed for user core
May 27 03:57:54.926551 systemd[1]: sshd@16-172.234.212.30:22-139.178.68.195:37722.service: Deactivated successfully.
May 27 03:57:54.928686 systemd[1]: session-17.scope: Deactivated successfully.
May 27 03:57:54.930658 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit.
May 27 03:57:54.932739 systemd-logind[1514]: Removed session 17.
May 27 03:57:54.985003 systemd[1]: Started sshd@17-172.234.212.30:22-139.178.68.195:37730.service - OpenSSH per-connection server daemon (139.178.68.195:37730).
May 27 03:57:55.326993 sshd[4198]: Accepted publickey for core from 139.178.68.195 port 37730 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:57:55.327999 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:57:55.332893 systemd-logind[1514]: New session 18 of user core.
May 27 03:57:55.340008 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 03:57:55.619385 sshd[4200]: Connection closed by 139.178.68.195 port 37730
May 27 03:57:55.620166 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
May 27 03:57:55.624159 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit.
May 27 03:57:55.625053 systemd[1]: sshd@17-172.234.212.30:22-139.178.68.195:37730.service: Deactivated successfully.
May 27 03:57:55.627037 systemd[1]: session-18.scope: Deactivated successfully.
May 27 03:57:55.628558 systemd-logind[1514]: Removed session 18.
May 27 03:58:00.688114 systemd[1]: Started sshd@18-172.234.212.30:22-139.178.68.195:37736.service - OpenSSH per-connection server daemon (139.178.68.195:37736).
May 27 03:58:01.034010 sshd[4214]: Accepted publickey for core from 139.178.68.195 port 37736 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:01.035701 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:01.042187 systemd-logind[1514]: New session 19 of user core.
May 27 03:58:01.047029 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 03:58:01.340763 sshd[4216]: Connection closed by 139.178.68.195 port 37736
May 27 03:58:01.341541 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
May 27 03:58:01.345292 systemd[1]: sshd@18-172.234.212.30:22-139.178.68.195:37736.service: Deactivated successfully.
May 27 03:58:01.347565 systemd[1]: session-19.scope: Deactivated successfully.
May 27 03:58:01.348625 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit.
May 27 03:58:01.350142 systemd-logind[1514]: Removed session 19.
May 27 03:58:02.082657 kubelet[2719]: E0527 03:58:02.082620 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:06.406232 systemd[1]: Started sshd@19-172.234.212.30:22-139.178.68.195:36128.service - OpenSSH per-connection server daemon (139.178.68.195:36128).
May 27 03:58:06.747943 sshd[4230]: Accepted publickey for core from 139.178.68.195 port 36128 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:06.748576 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:06.756032 systemd-logind[1514]: New session 20 of user core.
May 27 03:58:06.766330 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 03:58:07.059115 sshd[4232]: Connection closed by 139.178.68.195 port 36128
May 27 03:58:07.060075 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
May 27 03:58:07.065213 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit.
May 27 03:58:07.066415 systemd[1]: sshd@19-172.234.212.30:22-139.178.68.195:36128.service: Deactivated successfully.
May 27 03:58:07.068742 systemd[1]: session-20.scope: Deactivated successfully.
May 27 03:58:07.071532 systemd-logind[1514]: Removed session 20.
May 27 03:58:08.083148 kubelet[2719]: E0527 03:58:08.083084 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:10.802639 update_engine[1518]: I20250527 03:58:10.802578 1518 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 27 03:58:10.802639 update_engine[1518]: I20250527 03:58:10.802628 1518 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 27 03:58:10.803241 update_engine[1518]: I20250527 03:58:10.802999 1518 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 27 03:58:10.803514 update_engine[1518]: I20250527 03:58:10.803484 1518 omaha_request_params.cc:62] Current group set to alpha
May 27 03:58:10.803781 update_engine[1518]: I20250527 03:58:10.803591 1518 update_attempter.cc:499] Already updated boot flags. Skipping.
May 27 03:58:10.803781 update_engine[1518]: I20250527 03:58:10.803604 1518 update_attempter.cc:643] Scheduling an action processor start.
May 27 03:58:10.803781 update_engine[1518]: I20250527 03:58:10.803645 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 03:58:10.803781 update_engine[1518]: I20250527 03:58:10.803671 1518 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 27 03:58:10.803781 update_engine[1518]: I20250527 03:58:10.803730 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 03:58:10.803781 update_engine[1518]: I20250527 03:58:10.803742 1518 omaha_request_action.cc:272] Request:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]:
May 27 03:58:10.803781 update_engine[1518]: I20250527 03:58:10.803750 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:58:10.804287 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 27 03:58:10.804892 update_engine[1518]: I20250527 03:58:10.804847 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:58:10.805392 update_engine[1518]: I20250527 03:58:10.805333 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:58:10.935317 update_engine[1518]: E20250527 03:58:10.935230 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:58:10.935446 update_engine[1518]: I20250527 03:58:10.935346 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 27 03:58:12.083229 kubelet[2719]: E0527 03:58:12.083079 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:12.125695 systemd[1]: Started sshd@20-172.234.212.30:22-139.178.68.195:36144.service - OpenSSH per-connection server daemon (139.178.68.195:36144).
May 27 03:58:12.477401 sshd[4243]: Accepted publickey for core from 139.178.68.195 port 36144 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:12.479242 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:12.484110 systemd-logind[1514]: New session 21 of user core.
May 27 03:58:12.490998 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 03:58:12.796457 sshd[4245]: Connection closed by 139.178.68.195 port 36144
May 27 03:58:12.797389 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
May 27 03:58:12.802301 systemd[1]: sshd@20-172.234.212.30:22-139.178.68.195:36144.service: Deactivated successfully.
May 27 03:58:12.802378 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit.
May 27 03:58:12.805235 systemd[1]: session-21.scope: Deactivated successfully.
May 27 03:58:12.809074 systemd-logind[1514]: Removed session 21.
May 27 03:58:17.860717 systemd[1]: Started sshd@21-172.234.212.30:22-139.178.68.195:40366.service - OpenSSH per-connection server daemon (139.178.68.195:40366).
May 27 03:58:18.216042 sshd[4257]: Accepted publickey for core from 139.178.68.195 port 40366 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:18.217578 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:18.222524 systemd-logind[1514]: New session 22 of user core.
May 27 03:58:18.226045 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 03:58:18.516089 sshd[4261]: Connection closed by 139.178.68.195 port 40366
May 27 03:58:18.516935 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
May 27 03:58:18.521045 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit.
May 27 03:58:18.521717 systemd[1]: sshd@21-172.234.212.30:22-139.178.68.195:40366.service: Deactivated successfully.
May 27 03:58:18.524682 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:58:18.527036 systemd-logind[1514]: Removed session 22.
May 27 03:58:19.084016 kubelet[2719]: E0527 03:58:19.083237 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:20.801236 update_engine[1518]: I20250527 03:58:20.801148 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:58:20.801683 update_engine[1518]: I20250527 03:58:20.801447 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:58:20.801748 update_engine[1518]: I20250527 03:58:20.801708 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:58:20.846916 update_engine[1518]: E20250527 03:58:20.846776 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:58:20.847061 update_engine[1518]: I20250527 03:58:20.847023 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 27 03:58:22.082406 kubelet[2719]: E0527 03:58:22.082370 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:23.579177 systemd[1]: Started sshd@22-172.234.212.30:22-139.178.68.195:43404.service - OpenSSH per-connection server daemon (139.178.68.195:43404).
May 27 03:58:23.918781 sshd[4273]: Accepted publickey for core from 139.178.68.195 port 43404 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:23.920619 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:23.926082 systemd-logind[1514]: New session 23 of user core.
May 27 03:58:23.931014 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 03:58:24.083261 kubelet[2719]: E0527 03:58:24.083236 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:24.228312 sshd[4275]: Connection closed by 139.178.68.195 port 43404
May 27 03:58:24.228829 sshd-session[4273]: pam_unix(sshd:session): session closed for user core
May 27 03:58:24.233071 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit.
May 27 03:58:24.233975 systemd[1]: sshd@22-172.234.212.30:22-139.178.68.195:43404.service: Deactivated successfully.
May 27 03:58:24.236178 systemd[1]: session-23.scope: Deactivated successfully.
May 27 03:58:24.237921 systemd-logind[1514]: Removed session 23.
May 27 03:58:29.288409 systemd[1]: Started sshd@23-172.234.212.30:22-139.178.68.195:43416.service - OpenSSH per-connection server daemon (139.178.68.195:43416).
May 27 03:58:29.629029 sshd[4289]: Accepted publickey for core from 139.178.68.195 port 43416 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:29.630526 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:29.634730 systemd-logind[1514]: New session 24 of user core.
May 27 03:58:29.641001 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:58:29.925830 sshd[4291]: Connection closed by 139.178.68.195 port 43416
May 27 03:58:29.926347 sshd-session[4289]: pam_unix(sshd:session): session closed for user core
May 27 03:58:29.930545 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit.
May 27 03:58:29.930868 systemd[1]: sshd@23-172.234.212.30:22-139.178.68.195:43416.service: Deactivated successfully.
May 27 03:58:29.932914 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:58:29.934700 systemd-logind[1514]: Removed session 24.
May 27 03:58:30.799891 update_engine[1518]: I20250527 03:58:30.799799 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:58:30.800310 update_engine[1518]: I20250527 03:58:30.800105 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:58:30.800400 update_engine[1518]: I20250527 03:58:30.800365 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:58:30.801095 update_engine[1518]: E20250527 03:58:30.801033 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:58:30.801201 update_engine[1518]: I20250527 03:58:30.801117 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 27 03:58:32.083286 kubelet[2719]: E0527 03:58:32.083254 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:34.991964 systemd[1]: Started sshd@24-172.234.212.30:22-139.178.68.195:59844.service - OpenSSH per-connection server daemon (139.178.68.195:59844).
May 27 03:58:35.332501 sshd[4305]: Accepted publickey for core from 139.178.68.195 port 59844 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:35.333718 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:35.337898 systemd-logind[1514]: New session 25 of user core.
May 27 03:58:35.341037 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:58:35.636413 sshd[4307]: Connection closed by 139.178.68.195 port 59844
May 27 03:58:35.637123 sshd-session[4305]: pam_unix(sshd:session): session closed for user core
May 27 03:58:35.640747 systemd-logind[1514]: Session 25 logged out. Waiting for processes to exit.
May 27 03:58:35.641522 systemd[1]: sshd@24-172.234.212.30:22-139.178.68.195:59844.service: Deactivated successfully.
May 27 03:58:35.643726 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:58:35.645631 systemd-logind[1514]: Removed session 25.
May 27 03:58:40.701730 systemd[1]: Started sshd@25-172.234.212.30:22-139.178.68.195:59848.service - OpenSSH per-connection server daemon (139.178.68.195:59848).
May 27 03:58:40.801206 update_engine[1518]: I20250527 03:58:40.801143 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:58:40.801605 update_engine[1518]: I20250527 03:58:40.801411 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:58:40.801688 update_engine[1518]: I20250527 03:58:40.801654 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:58:40.802281 update_engine[1518]: E20250527 03:58:40.802253 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:58:40.802335 update_engine[1518]: I20250527 03:58:40.802294 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 03:58:40.802335 update_engine[1518]: I20250527 03:58:40.802304 1518 omaha_request_action.cc:617] Omaha request response:
May 27 03:58:40.802422 update_engine[1518]: E20250527 03:58:40.802377 1518 omaha_request_action.cc:636] Omaha request network transfer failed.
May 27 03:58:40.802422 update_engine[1518]: I20250527 03:58:40.802409 1518 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 27 03:58:40.802422 update_engine[1518]: I20250527 03:58:40.802415 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 03:58:40.802422 update_engine[1518]: I20250527 03:58:40.802420 1518 update_attempter.cc:306] Processing Done.
May 27 03:58:40.802516 update_engine[1518]: E20250527 03:58:40.802434 1518 update_attempter.cc:619] Update failed.
May 27 03:58:40.802516 update_engine[1518]: I20250527 03:58:40.802439 1518 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 27 03:58:40.802516 update_engine[1518]: I20250527 03:58:40.802444 1518 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 27 03:58:40.802516 update_engine[1518]: I20250527 03:58:40.802450 1518 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 27 03:58:40.802516 update_engine[1518]: I20250527 03:58:40.802513 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 03:58:40.802620 update_engine[1518]: I20250527 03:58:40.802531 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 03:58:40.802620 update_engine[1518]: I20250527 03:58:40.802537 1518 omaha_request_action.cc:272] Request:
May 27 03:58:40.802620 update_engine[1518]:
May 27 03:58:40.802620 update_engine[1518]:
May 27 03:58:40.802620 update_engine[1518]:
May 27 03:58:40.802620 update_engine[1518]:
May 27 03:58:40.802620 update_engine[1518]:
May 27 03:58:40.802620 update_engine[1518]:
May 27 03:58:40.802620 update_engine[1518]: I20250527 03:58:40.802543 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 03:58:40.802788 update_engine[1518]: I20250527 03:58:40.802693 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 27 03:58:40.802988 update_engine[1518]: I20250527 03:58:40.802866 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 27 03:58:40.803156 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 27 03:58:40.803776 update_engine[1518]: E20250527 03:58:40.803748 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 27 03:58:40.803823 update_engine[1518]: I20250527 03:58:40.803791 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 27 03:58:40.803823 update_engine[1518]: I20250527 03:58:40.803799 1518 omaha_request_action.cc:617] Omaha request response:
May 27 03:58:40.803823 update_engine[1518]: I20250527 03:58:40.803805 1518 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 03:58:40.803823 update_engine[1518]: I20250527 03:58:40.803810 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 27 03:58:40.803823 update_engine[1518]: I20250527 03:58:40.803815 1518 update_attempter.cc:306] Processing Done.
May 27 03:58:40.803823 update_engine[1518]: I20250527 03:58:40.803821 1518 update_attempter.cc:310] Error event sent.
May 27 03:58:40.803974 update_engine[1518]: I20250527 03:58:40.803829 1518 update_check_scheduler.cc:74] Next update check in 43m17s
May 27 03:58:40.804102 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 27 03:58:41.056203 sshd[4318]: Accepted publickey for core from 139.178.68.195 port 59848 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:41.057529 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:41.063664 systemd-logind[1514]: New session 26 of user core.
May 27 03:58:41.073013 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 03:58:41.083092 kubelet[2719]: E0527 03:58:41.082655 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:58:41.366765 sshd[4320]: Connection closed by 139.178.68.195 port 59848
May 27 03:58:41.367702 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
May 27 03:58:41.372494 systemd[1]: sshd@25-172.234.212.30:22-139.178.68.195:59848.service: Deactivated successfully.
May 27 03:58:41.375645 systemd[1]: session-26.scope: Deactivated successfully.
May 27 03:58:41.376950 systemd-logind[1514]: Session 26 logged out. Waiting for processes to exit.
May 27 03:58:41.379050 systemd-logind[1514]: Removed session 26.
May 27 03:58:46.429694 systemd[1]: Started sshd@26-172.234.212.30:22-139.178.68.195:50232.service - OpenSSH per-connection server daemon (139.178.68.195:50232).
May 27 03:58:46.762114 sshd[4332]: Accepted publickey for core from 139.178.68.195 port 50232 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:46.764144 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:46.771210 systemd-logind[1514]: New session 27 of user core.
May 27 03:58:46.778639 systemd[1]: Started session-27.scope - Session 27 of User core.
May 27 03:58:47.062915 sshd[4334]: Connection closed by 139.178.68.195 port 50232
May 27 03:58:47.063662 sshd-session[4332]: pam_unix(sshd:session): session closed for user core
May 27 03:58:47.068401 systemd-logind[1514]: Session 27 logged out. Waiting for processes to exit.
May 27 03:58:47.069174 systemd[1]: sshd@26-172.234.212.30:22-139.178.68.195:50232.service: Deactivated successfully.
May 27 03:58:47.071484 systemd[1]: session-27.scope: Deactivated successfully.
May 27 03:58:47.073815 systemd-logind[1514]: Removed session 27.
May 27 03:58:52.131516 systemd[1]: Started sshd@27-172.234.212.30:22-139.178.68.195:50238.service - OpenSSH per-connection server daemon (139.178.68.195:50238).
May 27 03:58:52.476196 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 50238 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:52.477621 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:52.481713 systemd-logind[1514]: New session 28 of user core.
May 27 03:58:52.488018 systemd[1]: Started session-28.scope - Session 28 of User core.
May 27 03:58:52.778907 sshd[4347]: Connection closed by 139.178.68.195 port 50238
May 27 03:58:52.780570 sshd-session[4345]: pam_unix(sshd:session): session closed for user core
May 27 03:58:52.784969 systemd[1]: sshd@27-172.234.212.30:22-139.178.68.195:50238.service: Deactivated successfully.
May 27 03:58:52.787277 systemd[1]: session-28.scope: Deactivated successfully.
May 27 03:58:52.788619 systemd-logind[1514]: Session 28 logged out. Waiting for processes to exit.
May 27 03:58:52.789997 systemd-logind[1514]: Removed session 28.
May 27 03:58:57.838851 systemd[1]: Started sshd@28-172.234.212.30:22-139.178.68.195:42978.service - OpenSSH per-connection server daemon (139.178.68.195:42978).
May 27 03:58:58.166840 sshd[4360]: Accepted publickey for core from 139.178.68.195 port 42978 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:58:58.168234 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:58:58.173764 systemd-logind[1514]: New session 29 of user core.
May 27 03:58:58.180026 systemd[1]: Started session-29.scope - Session 29 of User core.
May 27 03:58:58.466540 sshd[4362]: Connection closed by 139.178.68.195 port 42978
May 27 03:58:58.467103 sshd-session[4360]: pam_unix(sshd:session): session closed for user core
May 27 03:58:58.471669 systemd-logind[1514]: Session 29 logged out. Waiting for processes to exit.
May 27 03:58:58.472251 systemd[1]: sshd@28-172.234.212.30:22-139.178.68.195:42978.service: Deactivated successfully.
May 27 03:58:58.474233 systemd[1]: session-29.scope: Deactivated successfully.
May 27 03:58:58.476726 systemd-logind[1514]: Removed session 29.
May 27 03:59:03.530772 systemd[1]: Started sshd@29-172.234.212.30:22-139.178.68.195:42986.service - OpenSSH per-connection server daemon (139.178.68.195:42986).
May 27 03:59:03.880657 sshd[4376]: Accepted publickey for core from 139.178.68.195 port 42986 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:59:03.882658 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:59:03.888239 systemd-logind[1514]: New session 30 of user core.
May 27 03:59:03.896191 systemd[1]: Started session-30.scope - Session 30 of User core.
May 27 03:59:04.179134 sshd[4380]: Connection closed by 139.178.68.195 port 42986
May 27 03:59:04.179712 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
May 27 03:59:04.184377 systemd[1]: sshd@29-172.234.212.30:22-139.178.68.195:42986.service: Deactivated successfully.
May 27 03:59:04.186530 systemd[1]: session-30.scope: Deactivated successfully.
May 27 03:59:04.187944 systemd-logind[1514]: Session 30 logged out. Waiting for processes to exit.
May 27 03:59:04.189445 systemd-logind[1514]: Removed session 30.
May 27 03:59:09.237208 systemd[1]: Started sshd@30-172.234.212.30:22-139.178.68.195:54792.service - OpenSSH per-connection server daemon (139.178.68.195:54792).
May 27 03:59:09.564463 sshd[4392]: Accepted publickey for core from 139.178.68.195 port 54792 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:59:09.566242 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:59:09.571957 systemd-logind[1514]: New session 31 of user core.
May 27 03:59:09.577993 systemd[1]: Started session-31.scope - Session 31 of User core.
May 27 03:59:09.861406 sshd[4394]: Connection closed by 139.178.68.195 port 54792
May 27 03:59:09.862239 sshd-session[4392]: pam_unix(sshd:session): session closed for user core
May 27 03:59:09.867588 systemd[1]: sshd@30-172.234.212.30:22-139.178.68.195:54792.service: Deactivated successfully.
May 27 03:59:09.870149 systemd[1]: session-31.scope: Deactivated successfully.
May 27 03:59:09.871167 systemd-logind[1514]: Session 31 logged out. Waiting for processes to exit.
May 27 03:59:09.873371 systemd-logind[1514]: Removed session 31.
May 27 03:59:10.082585 kubelet[2719]: E0527 03:59:10.082547 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 03:59:14.923226 systemd[1]: Started sshd@31-172.234.212.30:22-139.178.68.195:48468.service - OpenSSH per-connection server daemon (139.178.68.195:48468).
May 27 03:59:15.262447 sshd[4405]: Accepted publickey for core from 139.178.68.195 port 48468 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 03:59:15.266443 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:59:15.272092 systemd-logind[1514]: New session 32 of user core.
May 27 03:59:15.276017 systemd[1]: Started session-32.scope - Session 32 of User core.
May 27 03:59:15.557496 sshd[4407]: Connection closed by 139.178.68.195 port 48468
May 27 03:59:15.558868 sshd-session[4405]: pam_unix(sshd:session): session closed for user core
May 27 03:59:15.563121 systemd[1]: sshd@31-172.234.212.30:22-139.178.68.195:48468.service: Deactivated successfully.
May 27 03:59:15.565489 systemd[1]: session-32.scope: Deactivated successfully.
May 27 03:59:15.569210 systemd-logind[1514]: Session 32 logged out. Waiting for processes to exit.
May 27 03:59:15.570705 systemd-logind[1514]: Removed session 32. May 27 03:59:20.621967 systemd[1]: Started sshd@32-172.234.212.30:22-139.178.68.195:48474.service - OpenSSH per-connection server daemon (139.178.68.195:48474). May 27 03:59:20.973748 sshd[4419]: Accepted publickey for core from 139.178.68.195 port 48474 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:20.975192 sshd-session[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:20.980738 systemd-logind[1514]: New session 33 of user core. May 27 03:59:20.991025 systemd[1]: Started session-33.scope - Session 33 of User core. May 27 03:59:21.272623 sshd[4421]: Connection closed by 139.178.68.195 port 48474 May 27 03:59:21.273477 sshd-session[4419]: pam_unix(sshd:session): session closed for user core May 27 03:59:21.277532 systemd[1]: sshd@32-172.234.212.30:22-139.178.68.195:48474.service: Deactivated successfully. May 27 03:59:21.279588 systemd[1]: session-33.scope: Deactivated successfully. May 27 03:59:21.281027 systemd-logind[1514]: Session 33 logged out. Waiting for processes to exit. May 27 03:59:21.282819 systemd-logind[1514]: Removed session 33. May 27 03:59:23.082897 kubelet[2719]: E0527 03:59:23.082440 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:59:26.333077 systemd[1]: Started sshd@33-172.234.212.30:22-139.178.68.195:41836.service - OpenSSH per-connection server daemon (139.178.68.195:41836). May 27 03:59:26.658625 sshd[4435]: Accepted publickey for core from 139.178.68.195 port 41836 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:26.660011 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:26.664933 systemd-logind[1514]: New session 34 of user core. 
May 27 03:59:26.673998 systemd[1]: Started session-34.scope - Session 34 of User core. May 27 03:59:26.956772 sshd[4437]: Connection closed by 139.178.68.195 port 41836 May 27 03:59:26.957362 sshd-session[4435]: pam_unix(sshd:session): session closed for user core May 27 03:59:26.961755 systemd[1]: sshd@33-172.234.212.30:22-139.178.68.195:41836.service: Deactivated successfully. May 27 03:59:26.962174 systemd-logind[1514]: Session 34 logged out. Waiting for processes to exit. May 27 03:59:26.964652 systemd[1]: session-34.scope: Deactivated successfully. May 27 03:59:26.966781 systemd-logind[1514]: Removed session 34. May 27 03:59:29.084130 kubelet[2719]: E0527 03:59:29.083139 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:59:30.083011 kubelet[2719]: E0527 03:59:30.082664 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:59:30.083011 kubelet[2719]: E0527 03:59:30.082905 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:59:32.022259 systemd[1]: Started sshd@34-172.234.212.30:22-139.178.68.195:41844.service - OpenSSH per-connection server daemon (139.178.68.195:41844). May 27 03:59:32.360382 sshd[4448]: Accepted publickey for core from 139.178.68.195 port 41844 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:32.361959 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:32.371554 systemd-logind[1514]: New session 35 of user core. May 27 03:59:32.379007 systemd[1]: Started session-35.scope - Session 35 of User core. 
May 27 03:59:32.659413 sshd[4452]: Connection closed by 139.178.68.195 port 41844 May 27 03:59:32.660338 sshd-session[4448]: pam_unix(sshd:session): session closed for user core May 27 03:59:32.664737 systemd[1]: sshd@34-172.234.212.30:22-139.178.68.195:41844.service: Deactivated successfully. May 27 03:59:32.667774 systemd[1]: session-35.scope: Deactivated successfully. May 27 03:59:32.669373 systemd-logind[1514]: Session 35 logged out. Waiting for processes to exit. May 27 03:59:32.670821 systemd-logind[1514]: Removed session 35. May 27 03:59:37.726904 systemd[1]: Started sshd@35-172.234.212.30:22-139.178.68.195:59428.service - OpenSSH per-connection server daemon (139.178.68.195:59428). May 27 03:59:38.063906 sshd[4464]: Accepted publickey for core from 139.178.68.195 port 59428 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:38.065224 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:38.070285 systemd-logind[1514]: New session 36 of user core. May 27 03:59:38.077041 systemd[1]: Started session-36.scope - Session 36 of User core. May 27 03:59:38.366648 sshd[4466]: Connection closed by 139.178.68.195 port 59428 May 27 03:59:38.367339 sshd-session[4464]: pam_unix(sshd:session): session closed for user core May 27 03:59:38.371856 systemd-logind[1514]: Session 36 logged out. Waiting for processes to exit. May 27 03:59:38.372628 systemd[1]: sshd@35-172.234.212.30:22-139.178.68.195:59428.service: Deactivated successfully. May 27 03:59:38.375311 systemd[1]: session-36.scope: Deactivated successfully. May 27 03:59:38.377079 systemd-logind[1514]: Removed session 36. 
May 27 03:59:43.083192 kubelet[2719]: E0527 03:59:43.082540 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:59:43.424022 systemd[1]: Started sshd@36-172.234.212.30:22-139.178.68.195:59438.service - OpenSSH per-connection server daemon (139.178.68.195:59438). May 27 03:59:43.778178 sshd[4478]: Accepted publickey for core from 139.178.68.195 port 59438 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:43.779707 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:43.785358 systemd-logind[1514]: New session 37 of user core. May 27 03:59:43.793015 systemd[1]: Started session-37.scope - Session 37 of User core. May 27 03:59:44.068173 sshd[4480]: Connection closed by 139.178.68.195 port 59438 May 27 03:59:44.068998 sshd-session[4478]: pam_unix(sshd:session): session closed for user core May 27 03:59:44.073040 systemd[1]: sshd@36-172.234.212.30:22-139.178.68.195:59438.service: Deactivated successfully. May 27 03:59:44.075387 systemd[1]: session-37.scope: Deactivated successfully. May 27 03:59:44.076313 systemd-logind[1514]: Session 37 logged out. Waiting for processes to exit. May 27 03:59:44.077798 systemd-logind[1514]: Removed session 37. May 27 03:59:49.139802 systemd[1]: Started sshd@37-172.234.212.30:22-139.178.68.195:41596.service - OpenSSH per-connection server daemon (139.178.68.195:41596). May 27 03:59:49.467454 sshd[4491]: Accepted publickey for core from 139.178.68.195 port 41596 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:49.468899 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:49.474117 systemd-logind[1514]: New session 38 of user core. May 27 03:59:49.477988 systemd[1]: Started session-38.scope - Session 38 of User core. 
May 27 03:59:49.767750 sshd[4493]: Connection closed by 139.178.68.195 port 41596 May 27 03:59:49.768664 sshd-session[4491]: pam_unix(sshd:session): session closed for user core May 27 03:59:49.771923 systemd[1]: sshd@37-172.234.212.30:22-139.178.68.195:41596.service: Deactivated successfully. May 27 03:59:49.774125 systemd[1]: session-38.scope: Deactivated successfully. May 27 03:59:49.777268 systemd-logind[1514]: Session 38 logged out. Waiting for processes to exit. May 27 03:59:49.778398 systemd-logind[1514]: Removed session 38. May 27 03:59:54.841736 systemd[1]: Started sshd@38-172.234.212.30:22-139.178.68.195:41342.service - OpenSSH per-connection server daemon (139.178.68.195:41342). May 27 03:59:55.177448 sshd[4506]: Accepted publickey for core from 139.178.68.195 port 41342 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 03:59:55.179424 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:59:55.188012 systemd-logind[1514]: New session 39 of user core. May 27 03:59:55.192026 systemd[1]: Started session-39.scope - Session 39 of User core. May 27 03:59:55.476123 sshd[4508]: Connection closed by 139.178.68.195 port 41342 May 27 03:59:55.477003 sshd-session[4506]: pam_unix(sshd:session): session closed for user core May 27 03:59:55.481325 systemd[1]: sshd@38-172.234.212.30:22-139.178.68.195:41342.service: Deactivated successfully. May 27 03:59:55.484003 systemd[1]: session-39.scope: Deactivated successfully. May 27 03:59:55.485192 systemd-logind[1514]: Session 39 logged out. Waiting for processes to exit. May 27 03:59:55.487329 systemd-logind[1514]: Removed session 39. 
May 27 03:59:56.082454 kubelet[2719]: E0527 03:59:56.082418 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 03:59:59.083744 kubelet[2719]: E0527 03:59:59.083677 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 04:00:00.541198 systemd[1]: Started sshd@39-172.234.212.30:22-139.178.68.195:41352.service - OpenSSH per-connection server daemon (139.178.68.195:41352). May 27 04:00:00.894564 sshd[4520]: Accepted publickey for core from 139.178.68.195 port 41352 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:00.896192 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:00.903608 systemd-logind[1514]: New session 40 of user core. May 27 04:00:00.913009 systemd[1]: Started session-40.scope - Session 40 of User core. May 27 04:00:01.205635 sshd[4522]: Connection closed by 139.178.68.195 port 41352 May 27 04:00:01.206594 sshd-session[4520]: pam_unix(sshd:session): session closed for user core May 27 04:00:01.212916 systemd[1]: sshd@39-172.234.212.30:22-139.178.68.195:41352.service: Deactivated successfully. May 27 04:00:01.216774 systemd[1]: session-40.scope: Deactivated successfully. May 27 04:00:01.218867 systemd-logind[1514]: Session 40 logged out. Waiting for processes to exit. May 27 04:00:01.220952 systemd-logind[1514]: Removed session 40. May 27 04:00:06.279423 systemd[1]: Started sshd@40-172.234.212.30:22-139.178.68.195:43668.service - OpenSSH per-connection server daemon (139.178.68.195:43668). 
May 27 04:00:06.619157 sshd[4535]: Accepted publickey for core from 139.178.68.195 port 43668 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:06.620780 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:06.629998 systemd-logind[1514]: New session 41 of user core. May 27 04:00:06.634058 systemd[1]: Started session-41.scope - Session 41 of User core. May 27 04:00:06.928948 sshd[4537]: Connection closed by 139.178.68.195 port 43668 May 27 04:00:06.929944 sshd-session[4535]: pam_unix(sshd:session): session closed for user core May 27 04:00:06.935669 systemd[1]: sshd@40-172.234.212.30:22-139.178.68.195:43668.service: Deactivated successfully. May 27 04:00:06.942106 systemd[1]: session-41.scope: Deactivated successfully. May 27 04:00:06.943597 systemd-logind[1514]: Session 41 logged out. Waiting for processes to exit. May 27 04:00:06.945314 systemd-logind[1514]: Removed session 41. May 27 04:00:11.083784 kubelet[2719]: E0527 04:00:11.083040 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 04:00:11.992809 systemd[1]: Started sshd@41-172.234.212.30:22-139.178.68.195:43680.service - OpenSSH per-connection server daemon (139.178.68.195:43680). May 27 04:00:12.326364 sshd[4549]: Accepted publickey for core from 139.178.68.195 port 43680 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:12.327978 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:12.333179 systemd-logind[1514]: New session 42 of user core. May 27 04:00:12.336020 systemd[1]: Started session-42.scope - Session 42 of User core. 
May 27 04:00:12.616137 sshd[4551]: Connection closed by 139.178.68.195 port 43680 May 27 04:00:12.617063 sshd-session[4549]: pam_unix(sshd:session): session closed for user core May 27 04:00:12.621229 systemd-logind[1514]: Session 42 logged out. Waiting for processes to exit. May 27 04:00:12.621763 systemd[1]: sshd@41-172.234.212.30:22-139.178.68.195:43680.service: Deactivated successfully. May 27 04:00:12.624023 systemd[1]: session-42.scope: Deactivated successfully. May 27 04:00:12.625920 systemd-logind[1514]: Removed session 42. May 27 04:00:17.680360 systemd[1]: Started sshd@42-172.234.212.30:22-139.178.68.195:44258.service - OpenSSH per-connection server daemon (139.178.68.195:44258). May 27 04:00:18.029524 sshd[4563]: Accepted publickey for core from 139.178.68.195 port 44258 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:18.030956 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:18.038429 systemd-logind[1514]: New session 43 of user core. May 27 04:00:18.053997 systemd[1]: Started session-43.scope - Session 43 of User core. May 27 04:00:18.342439 sshd[4565]: Connection closed by 139.178.68.195 port 44258 May 27 04:00:18.343868 sshd-session[4563]: pam_unix(sshd:session): session closed for user core May 27 04:00:18.348122 systemd[1]: sshd@42-172.234.212.30:22-139.178.68.195:44258.service: Deactivated successfully. May 27 04:00:18.351191 systemd[1]: session-43.scope: Deactivated successfully. May 27 04:00:18.352146 systemd-logind[1514]: Session 43 logged out. Waiting for processes to exit. May 27 04:00:18.353723 systemd-logind[1514]: Removed session 43. 
May 27 04:00:20.974920 containerd[1541]: time="2025-05-27T04:00:20.974781834Z" level=warning msg="container event discarded" container=d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283 type=CONTAINER_CREATED_EVENT May 27 04:00:20.986680 containerd[1541]: time="2025-05-27T04:00:20.986610148Z" level=warning msg="container event discarded" container=d539859c0ea2963764f2d4931387d6f6a7a0cc7ec46a66f316b5d7a1ba366283 type=CONTAINER_STARTED_EVENT May 27 04:00:20.999890 containerd[1541]: time="2025-05-27T04:00:20.999838985Z" level=warning msg="container event discarded" container=67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4 type=CONTAINER_CREATED_EVENT May 27 04:00:20.999890 containerd[1541]: time="2025-05-27T04:00:20.999862095Z" level=warning msg="container event discarded" container=67e99cec9bff0e62487bd0f66667cde6a5571ea1892f81492d7c09a1c8ea25f4 type=CONTAINER_STARTED_EVENT May 27 04:00:21.015258 containerd[1541]: time="2025-05-27T04:00:21.015183649Z" level=warning msg="container event discarded" container=42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44 type=CONTAINER_CREATED_EVENT May 27 04:00:21.026590 containerd[1541]: time="2025-05-27T04:00:21.026535338Z" level=warning msg="container event discarded" container=bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4 type=CONTAINER_CREATED_EVENT May 27 04:00:21.026590 containerd[1541]: time="2025-05-27T04:00:21.026559588Z" level=warning msg="container event discarded" container=bc2a837d6d1cfd7ed88afad23749ff9ec043d165791edfbf49d9f2da165645e4 type=CONTAINER_STARTED_EVENT May 27 04:00:21.026590 containerd[1541]: time="2025-05-27T04:00:21.026567718Z" level=warning msg="container event discarded" container=b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d type=CONTAINER_CREATED_EVENT May 27 04:00:21.043832 containerd[1541]: time="2025-05-27T04:00:21.043771215Z" level=warning msg="container event discarded" 
container=2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694 type=CONTAINER_CREATED_EVENT May 27 04:00:21.132497 containerd[1541]: time="2025-05-27T04:00:21.132448937Z" level=warning msg="container event discarded" container=42a302be6afd1397aa34a2163afe4b298ea047e1bbeab2ec908179b99b094b44 type=CONTAINER_STARTED_EVENT May 27 04:00:21.182672 containerd[1541]: time="2025-05-27T04:00:21.182620080Z" level=warning msg="container event discarded" container=2c29a3e75a0d502a75fcf074debc44f1390c39c6448755b668d9170242cb4694 type=CONTAINER_STARTED_EVENT May 27 04:00:21.201231 containerd[1541]: time="2025-05-27T04:00:21.201189458Z" level=warning msg="container event discarded" container=b4088219bc30bb1e32bae144f5721209c2ac6fe4fbd2422bd4bcd2237155c77d type=CONTAINER_STARTED_EVENT May 27 04:00:23.412067 systemd[1]: Started sshd@43-172.234.212.30:22-139.178.68.195:44268.service - OpenSSH per-connection server daemon (139.178.68.195:44268). May 27 04:00:23.755917 sshd[4578]: Accepted publickey for core from 139.178.68.195 port 44268 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:23.757128 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:23.768267 systemd-logind[1514]: New session 44 of user core. May 27 04:00:23.770283 systemd[1]: Started session-44.scope - Session 44 of User core. May 27 04:00:24.062894 sshd[4580]: Connection closed by 139.178.68.195 port 44268 May 27 04:00:24.064249 sshd-session[4578]: pam_unix(sshd:session): session closed for user core May 27 04:00:24.069090 systemd[1]: sshd@43-172.234.212.30:22-139.178.68.195:44268.service: Deactivated successfully. May 27 04:00:24.071425 systemd[1]: session-44.scope: Deactivated successfully. May 27 04:00:24.073377 systemd-logind[1514]: Session 44 logged out. Waiting for processes to exit. May 27 04:00:24.074540 systemd-logind[1514]: Removed session 44. 
May 27 04:00:29.127472 systemd[1]: Started sshd@44-172.234.212.30:22-139.178.68.195:47464.service - OpenSSH per-connection server daemon (139.178.68.195:47464). May 27 04:00:29.464106 sshd[4594]: Accepted publickey for core from 139.178.68.195 port 47464 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:29.465868 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:29.471576 systemd-logind[1514]: New session 45 of user core. May 27 04:00:29.477008 systemd[1]: Started session-45.scope - Session 45 of User core. May 27 04:00:29.761750 sshd[4596]: Connection closed by 139.178.68.195 port 47464 May 27 04:00:29.762458 sshd-session[4594]: pam_unix(sshd:session): session closed for user core May 27 04:00:29.766816 systemd-logind[1514]: Session 45 logged out. Waiting for processes to exit. May 27 04:00:29.767493 systemd[1]: sshd@44-172.234.212.30:22-139.178.68.195:47464.service: Deactivated successfully. May 27 04:00:29.770735 systemd[1]: session-45.scope: Deactivated successfully. May 27 04:00:29.772639 systemd-logind[1514]: Removed session 45. 
May 27 04:00:31.963072 containerd[1541]: time="2025-05-27T04:00:31.962933671Z" level=warning msg="container event discarded" container=77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0 type=CONTAINER_CREATED_EVENT May 27 04:00:31.963072 containerd[1541]: time="2025-05-27T04:00:31.963027542Z" level=warning msg="container event discarded" container=77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0 type=CONTAINER_STARTED_EVENT May 27 04:00:31.976442 containerd[1541]: time="2025-05-27T04:00:31.976367136Z" level=warning msg="container event discarded" container=66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a type=CONTAINER_CREATED_EVENT May 27 04:00:31.976442 containerd[1541]: time="2025-05-27T04:00:31.976414347Z" level=warning msg="container event discarded" container=66e7a149aebb384a52ec930287fdc56a675a02a2d8da1e2e5960bbd2af0e1e0a type=CONTAINER_STARTED_EVENT May 27 04:00:31.997678 containerd[1541]: time="2025-05-27T04:00:31.997617719Z" level=warning msg="container event discarded" container=bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6 type=CONTAINER_CREATED_EVENT May 27 04:00:32.131219 containerd[1541]: time="2025-05-27T04:00:32.131141396Z" level=warning msg="container event discarded" container=bc96c0e8c72e9568cbb9260411b4ef3a560f2c3459b0fc27857ba01a05ebd0b6 type=CONTAINER_STARTED_EVENT May 27 04:00:32.446011 containerd[1541]: time="2025-05-27T04:00:32.445935801Z" level=warning msg="container event discarded" container=68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2 type=CONTAINER_CREATED_EVENT May 27 04:00:32.446011 containerd[1541]: time="2025-05-27T04:00:32.445998412Z" level=warning msg="container event discarded" container=68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2 type=CONTAINER_STARTED_EVENT May 27 04:00:34.829077 systemd[1]: Started sshd@45-172.234.212.30:22-139.178.68.195:51266.service - OpenSSH per-connection server daemon (139.178.68.195:51266). 
May 27 04:00:35.167714 sshd[4610]: Accepted publickey for core from 139.178.68.195 port 51266 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:35.169539 sshd-session[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:35.179431 systemd-logind[1514]: New session 46 of user core. May 27 04:00:35.184017 systemd[1]: Started session-46.scope - Session 46 of User core. May 27 04:00:35.470928 sshd[4612]: Connection closed by 139.178.68.195 port 51266 May 27 04:00:35.471790 sshd-session[4610]: pam_unix(sshd:session): session closed for user core May 27 04:00:35.476665 systemd[1]: sshd@45-172.234.212.30:22-139.178.68.195:51266.service: Deactivated successfully. May 27 04:00:35.479307 systemd[1]: session-46.scope: Deactivated successfully. May 27 04:00:35.480247 systemd-logind[1514]: Session 46 logged out. Waiting for processes to exit. May 27 04:00:35.482001 systemd-logind[1514]: Removed session 46. May 27 04:00:37.452820 containerd[1541]: time="2025-05-27T04:00:37.452758136Z" level=warning msg="container event discarded" container=5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb type=CONTAINER_CREATED_EVENT May 27 04:00:37.517600 containerd[1541]: time="2025-05-27T04:00:37.517530115Z" level=warning msg="container event discarded" container=5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb type=CONTAINER_STARTED_EVENT May 27 04:00:37.625963 containerd[1541]: time="2025-05-27T04:00:37.625904853Z" level=warning msg="container event discarded" container=5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb type=CONTAINER_STOPPED_EVENT May 27 04:00:38.196438 containerd[1541]: time="2025-05-27T04:00:38.196363349Z" level=warning msg="container event discarded" container=1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731 type=CONTAINER_CREATED_EVENT May 27 04:00:38.255601 containerd[1541]: time="2025-05-27T04:00:38.255568815Z" level=warning msg="container 
event discarded" container=1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731 type=CONTAINER_STARTED_EVENT May 27 04:00:38.309978 containerd[1541]: time="2025-05-27T04:00:38.309918034Z" level=warning msg="container event discarded" container=1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731 type=CONTAINER_STOPPED_EVENT May 27 04:00:39.202574 containerd[1541]: time="2025-05-27T04:00:39.202507029Z" level=warning msg="container event discarded" container=83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a type=CONTAINER_CREATED_EVENT May 27 04:00:39.327961 containerd[1541]: time="2025-05-27T04:00:39.327905419Z" level=warning msg="container event discarded" container=83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a type=CONTAINER_STARTED_EVENT May 27 04:00:39.420326 containerd[1541]: time="2025-05-27T04:00:39.420284790Z" level=warning msg="container event discarded" container=83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a type=CONTAINER_STOPPED_EVENT May 27 04:00:39.509596 containerd[1541]: time="2025-05-27T04:00:39.509481236Z" level=warning msg="container event discarded" container=b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70 type=CONTAINER_CREATED_EVENT May 27 04:00:39.576890 containerd[1541]: time="2025-05-27T04:00:39.576823812Z" level=warning msg="container event discarded" container=b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70 type=CONTAINER_STARTED_EVENT May 27 04:00:40.082783 kubelet[2719]: E0527 04:00:40.082739 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 04:00:40.212522 containerd[1541]: time="2025-05-27T04:00:40.212434352Z" level=warning msg="container event discarded" container=583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743 type=CONTAINER_CREATED_EVENT May 27 04:00:40.351113 
containerd[1541]: time="2025-05-27T04:00:40.350957809Z" level=warning msg="container event discarded" container=583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743 type=CONTAINER_STARTED_EVENT May 27 04:00:40.413364 containerd[1541]: time="2025-05-27T04:00:40.413288414Z" level=warning msg="container event discarded" container=583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743 type=CONTAINER_STOPPED_EVENT May 27 04:00:40.534954 systemd[1]: Started sshd@46-172.234.212.30:22-139.178.68.195:51280.service - OpenSSH per-connection server daemon (139.178.68.195:51280). May 27 04:00:40.881532 sshd[4624]: Accepted publickey for core from 139.178.68.195 port 51280 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:40.883362 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:40.889862 systemd-logind[1514]: New session 47 of user core. May 27 04:00:40.896080 systemd[1]: Started session-47.scope - Session 47 of User core. May 27 04:00:41.193188 sshd[4626]: Connection closed by 139.178.68.195 port 51280 May 27 04:00:41.193822 sshd-session[4624]: pam_unix(sshd:session): session closed for user core May 27 04:00:41.197947 systemd-logind[1514]: Session 47 logged out. Waiting for processes to exit. May 27 04:00:41.198868 systemd[1]: sshd@46-172.234.212.30:22-139.178.68.195:51280.service: Deactivated successfully. May 27 04:00:41.201208 systemd[1]: session-47.scope: Deactivated successfully. May 27 04:00:41.202679 systemd-logind[1514]: Removed session 47. 
May 27 04:00:41.231172 containerd[1541]: time="2025-05-27T04:00:41.231107451Z" level=warning msg="container event discarded" container=6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd type=CONTAINER_CREATED_EVENT May 27 04:00:41.311381 containerd[1541]: time="2025-05-27T04:00:41.311308181Z" level=warning msg="container event discarded" container=6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd type=CONTAINER_STARTED_EVENT May 27 04:00:45.083419 kubelet[2719]: E0527 04:00:45.083082 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 04:00:46.260128 systemd[1]: Started sshd@47-172.234.212.30:22-139.178.68.195:38776.service - OpenSSH per-connection server daemon (139.178.68.195:38776). May 27 04:00:46.600034 sshd[4639]: Accepted publickey for core from 139.178.68.195 port 38776 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:00:46.601661 sshd-session[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:00:46.607315 systemd-logind[1514]: New session 48 of user core. May 27 04:00:46.615012 systemd[1]: Started session-48.scope - Session 48 of User core. May 27 04:00:46.900978 sshd[4641]: Connection closed by 139.178.68.195 port 38776 May 27 04:00:46.902101 sshd-session[4639]: pam_unix(sshd:session): session closed for user core May 27 04:00:46.906380 systemd-logind[1514]: Session 48 logged out. Waiting for processes to exit. May 27 04:00:46.907365 systemd[1]: sshd@47-172.234.212.30:22-139.178.68.195:38776.service: Deactivated successfully. May 27 04:00:46.910992 systemd[1]: session-48.scope: Deactivated successfully. May 27 04:00:46.913012 systemd-logind[1514]: Removed session 48. 
May 27 04:00:47.083213 kubelet[2719]: E0527 04:00:47.082664 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:00:48.399384 containerd[1541]: time="2025-05-27T04:00:48.399313255Z" level=warning msg="container event discarded" container=0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264 type=CONTAINER_CREATED_EVENT
May 27 04:00:48.399806 containerd[1541]: time="2025-05-27T04:00:48.399778425Z" level=warning msg="container event discarded" container=0a3295a78438c57d92949c6315961e477bd8109a29fd9d54eb9fd7d479bc4264 type=CONTAINER_STARTED_EVENT
May 27 04:00:48.426006 containerd[1541]: time="2025-05-27T04:00:48.425978226Z" level=warning msg="container event discarded" container=ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72 type=CONTAINER_CREATED_EVENT
May 27 04:00:48.426077 containerd[1541]: time="2025-05-27T04:00:48.426008737Z" level=warning msg="container event discarded" container=b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19 type=CONTAINER_CREATED_EVENT
May 27 04:00:48.426077 containerd[1541]: time="2025-05-27T04:00:48.426017457Z" level=warning msg="container event discarded" container=b14bc9e489a23b8bef7fabedf9d5776f0c4f152502e54f40db73492f6b3a1c19 type=CONTAINER_STARTED_EVENT
May 27 04:00:48.467315 containerd[1541]: time="2025-05-27T04:00:48.467262152Z" level=warning msg="container event discarded" container=157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234 type=CONTAINER_CREATED_EVENT
May 27 04:00:48.524617 containerd[1541]: time="2025-05-27T04:00:48.524576553Z" level=warning msg="container event discarded" container=ade0e4f5900aa3faa7a1fd657a75346c89658d60a7bc5c730b8113630babac72 type=CONTAINER_STARTED_EVENT
May 27 04:00:48.563854 containerd[1541]: time="2025-05-27T04:00:48.563822914Z" level=warning msg="container event discarded" container=157f49776630a261d17aea680e02c9eb4b9a346e49d7378ffb534cc841f46234 type=CONTAINER_STARTED_EVENT
May 27 04:00:51.968062 systemd[1]: Started sshd@48-172.234.212.30:22-139.178.68.195:38778.service - OpenSSH per-connection server daemon (139.178.68.195:38778).
May 27 04:00:52.316971 sshd[4653]: Accepted publickey for core from 139.178.68.195 port 38778 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:00:52.318200 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:00:52.323941 systemd-logind[1514]: New session 49 of user core.
May 27 04:00:52.332191 systemd[1]: Started session-49.scope - Session 49 of User core.
May 27 04:00:52.623965 sshd[4655]: Connection closed by 139.178.68.195 port 38778
May 27 04:00:52.625075 sshd-session[4653]: pam_unix(sshd:session): session closed for user core
May 27 04:00:52.629621 systemd[1]: sshd@48-172.234.212.30:22-139.178.68.195:38778.service: Deactivated successfully.
May 27 04:00:52.631776 systemd[1]: session-49.scope: Deactivated successfully.
May 27 04:00:52.632821 systemd-logind[1514]: Session 49 logged out. Waiting for processes to exit.
May 27 04:00:52.634554 systemd-logind[1514]: Removed session 49.
May 27 04:00:53.082940 kubelet[2719]: E0527 04:00:53.082583 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:00:56.082752 kubelet[2719]: E0527 04:00:56.082714 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:00:57.685114 systemd[1]: Started sshd@49-172.234.212.30:22-139.178.68.195:57142.service - OpenSSH per-connection server daemon (139.178.68.195:57142).
May 27 04:00:58.026827 sshd[4667]: Accepted publickey for core from 139.178.68.195 port 57142 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:00:58.028523 sshd-session[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:00:58.034377 systemd-logind[1514]: New session 50 of user core.
May 27 04:00:58.041055 systemd[1]: Started session-50.scope - Session 50 of User core.
May 27 04:00:58.326813 sshd[4669]: Connection closed by 139.178.68.195 port 57142
May 27 04:00:58.328071 sshd-session[4667]: pam_unix(sshd:session): session closed for user core
May 27 04:00:58.333025 systemd[1]: sshd@49-172.234.212.30:22-139.178.68.195:57142.service: Deactivated successfully.
May 27 04:00:58.335588 systemd[1]: session-50.scope: Deactivated successfully.
May 27 04:00:58.336721 systemd-logind[1514]: Session 50 logged out. Waiting for processes to exit.
May 27 04:00:58.338218 systemd-logind[1514]: Removed session 50.
May 27 04:01:03.082906 kubelet[2719]: E0527 04:01:03.082806 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:03.387226 systemd[1]: Started sshd@50-172.234.212.30:22-139.178.68.195:57158.service - OpenSSH per-connection server daemon (139.178.68.195:57158).
May 27 04:01:03.718117 sshd[4683]: Accepted publickey for core from 139.178.68.195 port 57158 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:03.719745 sshd-session[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:03.726831 systemd-logind[1514]: New session 51 of user core.
May 27 04:01:03.732015 systemd[1]: Started session-51.scope - Session 51 of User core.
May 27 04:01:04.024003 sshd[4685]: Connection closed by 139.178.68.195 port 57158
May 27 04:01:04.024822 sshd-session[4683]: pam_unix(sshd:session): session closed for user core
May 27 04:01:04.028031 systemd[1]: sshd@50-172.234.212.30:22-139.178.68.195:57158.service: Deactivated successfully.
May 27 04:01:04.030354 systemd[1]: session-51.scope: Deactivated successfully.
May 27 04:01:04.033808 systemd-logind[1514]: Session 51 logged out. Waiting for processes to exit.
May 27 04:01:04.035076 systemd-logind[1514]: Removed session 51.
May 27 04:01:05.083619 kubelet[2719]: E0527 04:01:05.083052 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:09.082734 systemd[1]: Started sshd@51-172.234.212.30:22-139.178.68.195:57222.service - OpenSSH per-connection server daemon (139.178.68.195:57222).
May 27 04:01:09.416204 sshd[4697]: Accepted publickey for core from 139.178.68.195 port 57222 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:09.417651 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:09.422856 systemd-logind[1514]: New session 52 of user core.
May 27 04:01:09.431025 systemd[1]: Started session-52.scope - Session 52 of User core.
May 27 04:01:09.715518 sshd[4699]: Connection closed by 139.178.68.195 port 57222
May 27 04:01:09.717086 sshd-session[4697]: pam_unix(sshd:session): session closed for user core
May 27 04:01:09.721123 systemd[1]: sshd@51-172.234.212.30:22-139.178.68.195:57222.service: Deactivated successfully.
May 27 04:01:09.723520 systemd[1]: session-52.scope: Deactivated successfully.
May 27 04:01:09.725301 systemd-logind[1514]: Session 52 logged out. Waiting for processes to exit.
May 27 04:01:09.726456 systemd-logind[1514]: Removed session 52.
May 27 04:01:14.082826 kubelet[2719]: E0527 04:01:14.082789 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:14.793392 systemd[1]: Started sshd@52-172.234.212.30:22-139.178.68.195:44404.service - OpenSSH per-connection server daemon (139.178.68.195:44404).
May 27 04:01:15.141988 sshd[4711]: Accepted publickey for core from 139.178.68.195 port 44404 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:15.142993 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:15.149073 systemd-logind[1514]: New session 53 of user core.
May 27 04:01:15.155274 systemd[1]: Started session-53.scope - Session 53 of User core.
May 27 04:01:15.448855 sshd[4713]: Connection closed by 139.178.68.195 port 44404
May 27 04:01:15.450061 sshd-session[4711]: pam_unix(sshd:session): session closed for user core
May 27 04:01:15.454202 systemd-logind[1514]: Session 53 logged out. Waiting for processes to exit.
May 27 04:01:15.455037 systemd[1]: sshd@52-172.234.212.30:22-139.178.68.195:44404.service: Deactivated successfully.
May 27 04:01:15.457381 systemd[1]: session-53.scope: Deactivated successfully.
May 27 04:01:15.459099 systemd-logind[1514]: Removed session 53.
May 27 04:01:15.510671 systemd[1]: Started sshd@53-172.234.212.30:22-139.178.68.195:44408.service - OpenSSH per-connection server daemon (139.178.68.195:44408).
May 27 04:01:15.860059 sshd[4725]: Accepted publickey for core from 139.178.68.195 port 44408 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:15.861201 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:15.870145 systemd-logind[1514]: New session 54 of user core.
May 27 04:01:15.876082 systemd[1]: Started session-54.scope - Session 54 of User core.
May 27 04:01:17.333900 containerd[1541]: time="2025-05-27T04:01:17.331691393Z" level=info msg="StopContainer for \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" with timeout 30 (s)"
May 27 04:01:17.333900 containerd[1541]: time="2025-05-27T04:01:17.333766207Z" level=info msg="Stop container \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" with signal terminated"
May 27 04:01:17.349584 systemd[1]: cri-containerd-b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70.scope: Deactivated successfully.
May 27 04:01:17.351551 containerd[1541]: time="2025-05-27T04:01:17.351476777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" id:\"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" pid:3287 exited_at:{seconds:1748318477 nanos:350473002}"
May 27 04:01:17.351602 containerd[1541]: time="2025-05-27T04:01:17.351519118Z" level=info msg="received exit event container_id:\"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" id:\"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" pid:3287 exited_at:{seconds:1748318477 nanos:350473002}"
May 27 04:01:17.372662 containerd[1541]: time="2025-05-27T04:01:17.372267509Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 04:01:17.379743 containerd[1541]: time="2025-05-27T04:01:17.379713633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" id:\"68a666ad77e886a29a592d25c29d73d12c25bc57d0edd93e5cd2793464b3f9f4\" pid:4755 exited_at:{seconds:1748318477 nanos:379520127}"
May 27 04:01:17.384721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70-rootfs.mount: Deactivated successfully.
May 27 04:01:17.387487 containerd[1541]: time="2025-05-27T04:01:17.387361222Z" level=info msg="StopContainer for \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" with timeout 2 (s)"
May 27 04:01:17.387994 containerd[1541]: time="2025-05-27T04:01:17.387837675Z" level=info msg="Stop container \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" with signal terminated"
May 27 04:01:17.395284 containerd[1541]: time="2025-05-27T04:01:17.395260107Z" level=info msg="StopContainer for \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" returns successfully"
May 27 04:01:17.396271 containerd[1541]: time="2025-05-27T04:01:17.396251424Z" level=info msg="StopPodSandbox for \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\""
May 27 04:01:17.396335 containerd[1541]: time="2025-05-27T04:01:17.396297285Z" level=info msg="Container to stop \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 04:01:17.398065 systemd-networkd[1465]: lxc_health: Link DOWN
May 27 04:01:17.399120 systemd-networkd[1465]: lxc_health: Lost carrier
May 27 04:01:17.410608 systemd[1]: cri-containerd-68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2.scope: Deactivated successfully.
May 27 04:01:17.414131 containerd[1541]: time="2025-05-27T04:01:17.412340942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" id:\"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" pid:2993 exit_status:137 exited_at:{seconds:1748318477 nanos:411541951}"
May 27 04:01:17.421332 systemd[1]: cri-containerd-6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd.scope: Deactivated successfully.
May 27 04:01:17.421647 systemd[1]: cri-containerd-6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd.scope: Consumed 6.929s CPU time, 123.9M memory peak, 144K read from disk, 13.3M written to disk.
May 27 04:01:17.423575 containerd[1541]: time="2025-05-27T04:01:17.423550434Z" level=info msg="received exit event container_id:\"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" id:\"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" pid:3378 exited_at:{seconds:1748318477 nanos:423100232}"
May 27 04:01:17.455714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd-rootfs.mount: Deactivated successfully.
May 27 04:01:17.465813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2-rootfs.mount: Deactivated successfully.
May 27 04:01:17.469705 containerd[1541]: time="2025-05-27T04:01:17.469222923Z" level=info msg="StopContainer for \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" returns successfully"
May 27 04:01:17.470264 containerd[1541]: time="2025-05-27T04:01:17.469977713Z" level=info msg="shim disconnected" id=68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2 namespace=k8s.io
May 27 04:01:17.470310 containerd[1541]: time="2025-05-27T04:01:17.470263060Z" level=warning msg="cleaning up after shim disconnected" id=68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2 namespace=k8s.io
May 27 04:01:17.470310 containerd[1541]: time="2025-05-27T04:01:17.470272511Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 04:01:17.471671 containerd[1541]: time="2025-05-27T04:01:17.471653076Z" level=info msg="StopPodSandbox for \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\""
May 27 04:01:17.471837 containerd[1541]: time="2025-05-27T04:01:17.471823891Z" level=info msg="Container to stop \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 04:01:17.471956 containerd[1541]: time="2025-05-27T04:01:17.471942354Z" level=info msg="Container to stop \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 04:01:17.472030 containerd[1541]: time="2025-05-27T04:01:17.472018516Z" level=info msg="Container to stop \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 04:01:17.472091 containerd[1541]: time="2025-05-27T04:01:17.472067648Z" level=info msg="Container to stop \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 04:01:17.472146 containerd[1541]: time="2025-05-27T04:01:17.472134599Z" level=info msg="Container to stop \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 04:01:17.480206 systemd[1]: cri-containerd-77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0.scope: Deactivated successfully.
May 27 04:01:17.499278 containerd[1541]: time="2025-05-27T04:01:17.497559320Z" level=info msg="received exit event sandbox_id:\"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" exit_status:137 exited_at:{seconds:1748318477 nanos:411541951}"
May 27 04:01:17.500798 containerd[1541]: time="2025-05-27T04:01:17.500760404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" id:\"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" pid:3378 exited_at:{seconds:1748318477 nanos:423100232}"
May 27 04:01:17.500798 containerd[1541]: time="2025-05-27T04:01:17.500793815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" id:\"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" pid:2870 exit_status:137 exited_at:{seconds:1748318477 nanos:482164960}"
May 27 04:01:17.501087 containerd[1541]: time="2025-05-27T04:01:17.501058982Z" level=info msg="TearDown network for sandbox \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" successfully"
May 27 04:01:17.501087 containerd[1541]: time="2025-05-27T04:01:17.501078402Z" level=info msg="StopPodSandbox for \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" returns successfully"
May 27 04:01:17.501446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2-shm.mount: Deactivated successfully.
May 27 04:01:17.513371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0-rootfs.mount: Deactivated successfully.
May 27 04:01:17.517844 containerd[1541]: time="2025-05-27T04:01:17.517807038Z" level=info msg="shim disconnected" id=77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0 namespace=k8s.io
May 27 04:01:17.517844 containerd[1541]: time="2025-05-27T04:01:17.517839929Z" level=warning msg="cleaning up after shim disconnected" id=77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0 namespace=k8s.io
May 27 04:01:17.518979 containerd[1541]: time="2025-05-27T04:01:17.517853320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 04:01:17.519229 containerd[1541]: time="2025-05-27T04:01:17.518389773Z" level=info msg="received exit event sandbox_id:\"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" exit_status:137 exited_at:{seconds:1748318477 nanos:482164960}"
May 27 04:01:17.520864 containerd[1541]: time="2025-05-27T04:01:17.520790906Z" level=info msg="TearDown network for sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" successfully"
May 27 04:01:17.520864 containerd[1541]: time="2025-05-27T04:01:17.520816236Z" level=info msg="StopPodSandbox for \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" returns successfully"
May 27 04:01:17.568746 kubelet[2719]: I0527 04:01:17.568707 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-etc-cni-netd\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.568746 kubelet[2719]: I0527 04:01:17.568749 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc40d681-1020-4117-8945-1be416a58bee-cilium-config-path\") pod \"cc40d681-1020-4117-8945-1be416a58bee\" (UID: \"cc40d681-1020-4117-8945-1be416a58bee\") "
May 27 04:01:17.568746 kubelet[2719]: I0527 04:01:17.568766 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hostproc\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569261 kubelet[2719]: I0527 04:01:17.568782 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-cgroup\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569261 kubelet[2719]: I0527 04:01:17.568799 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-kube-api-access-rdwtg\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569261 kubelet[2719]: I0527 04:01:17.568815 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bd2b0ce-f53c-403f-999d-f88cb9399e82-clustermesh-secrets\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569261 kubelet[2719]: I0527 04:01:17.568829 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cni-path\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569261 kubelet[2719]: I0527 04:01:17.568845 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-net\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569261 kubelet[2719]: I0527 04:01:17.568858 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-xtables-lock\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569413 kubelet[2719]: I0527 04:01:17.568895 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-kernel\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569413 kubelet[2719]: I0527 04:01:17.568910 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-bpf-maps\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569413 kubelet[2719]: I0527 04:01:17.568926 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxnb4\" (UniqueName: \"kubernetes.io/projected/cc40d681-1020-4117-8945-1be416a58bee-kube-api-access-hxnb4\") pod \"cc40d681-1020-4117-8945-1be416a58bee\" (UID: \"cc40d681-1020-4117-8945-1be416a58bee\") "
May 27 04:01:17.569413 kubelet[2719]: I0527 04:01:17.568942 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hubble-tls\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569413 kubelet[2719]: I0527 04:01:17.568956 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-lib-modules\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569413 kubelet[2719]: I0527 04:01:17.568970 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-run\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.569547 kubelet[2719]: I0527 04:01:17.568985 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-config-path\") pod \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\" (UID: \"3bd2b0ce-f53c-403f-999d-f88cb9399e82\") "
May 27 04:01:17.570894 kubelet[2719]: I0527 04:01:17.569598 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.570894 kubelet[2719]: I0527 04:01:17.569638 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.572237 kubelet[2719]: I0527 04:01:17.572213 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 04:01:17.572290 kubelet[2719]: I0527 04:01:17.572250 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.572290 kubelet[2719]: I0527 04:01:17.572266 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.572290 kubelet[2719]: I0527 04:01:17.572279 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.573134 kubelet[2719]: I0527 04:01:17.573115 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc40d681-1020-4117-8945-1be416a58bee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc40d681-1020-4117-8945-1be416a58bee" (UID: "cc40d681-1020-4117-8945-1be416a58bee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 04:01:17.573216 kubelet[2719]: I0527 04:01:17.573202 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hostproc" (OuterVolumeSpecName: "hostproc") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.573271 kubelet[2719]: I0527 04:01:17.573259 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.575281 kubelet[2719]: I0527 04:01:17.575256 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc40d681-1020-4117-8945-1be416a58bee-kube-api-access-hxnb4" (OuterVolumeSpecName: "kube-api-access-hxnb4") pod "cc40d681-1020-4117-8945-1be416a58bee" (UID: "cc40d681-1020-4117-8945-1be416a58bee"). InnerVolumeSpecName "kube-api-access-hxnb4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 04:01:17.575714 kubelet[2719]: I0527 04:01:17.575693 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-kube-api-access-rdwtg" (OuterVolumeSpecName: "kube-api-access-rdwtg") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "kube-api-access-rdwtg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 04:01:17.577690 kubelet[2719]: I0527 04:01:17.577664 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 04:01:17.577741 kubelet[2719]: I0527 04:01:17.577698 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.577741 kubelet[2719]: I0527 04:01:17.577714 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.577741 kubelet[2719]: I0527 04:01:17.577727 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cni-path" (OuterVolumeSpecName: "cni-path") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 04:01:17.578680 kubelet[2719]: I0527 04:01:17.578661 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd2b0ce-f53c-403f-999d-f88cb9399e82-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3bd2b0ce-f53c-403f-999d-f88cb9399e82" (UID: "3bd2b0ce-f53c-403f-999d-f88cb9399e82"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 04:01:17.670116 kubelet[2719]: I0527 04:01:17.670072 2719 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-etc-cni-netd\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670116 kubelet[2719]: I0527 04:01:17.670116 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc40d681-1020-4117-8945-1be416a58bee-cilium-config-path\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670136 2719 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hostproc\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670149 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-cgroup\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670162 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-kube-api-access-rdwtg\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670176 2719 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bd2b0ce-f53c-403f-999d-f88cb9399e82-clustermesh-secrets\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670189 2719 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cni-path\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670200 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-net\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670213 2719 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-xtables-lock\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670258 kubelet[2719]: I0527 04:01:17.670226 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-host-proc-sys-kernel\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670432 kubelet[2719]: I0527 04:01:17.670239 2719 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-bpf-maps\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670432 kubelet[2719]: I0527 04:01:17.670252 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxnb4\" (UniqueName: \"kubernetes.io/projected/cc40d681-1020-4117-8945-1be416a58bee-kube-api-access-hxnb4\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670432 kubelet[2719]: I0527 04:01:17.670267 2719 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bd2b0ce-f53c-403f-999d-f88cb9399e82-hubble-tls\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670432 kubelet[2719]: I0527 04:01:17.670280 2719 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-lib-modules\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670432 kubelet[2719]: I0527 04:01:17.670293 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-run\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.670432 kubelet[2719]: I0527 04:01:17.670336 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bd2b0ce-f53c-403f-999d-f88cb9399e82-cilium-config-path\") on node \"172-234-212-30\" DevicePath \"\""
May 27 04:01:17.870027 kubelet[2719]: I0527 04:01:17.869998 2719 scope.go:117] "RemoveContainer" containerID="b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70"
May 27 04:01:17.873100 containerd[1541]: time="2025-05-27T04:01:17.872622486Z" level=info msg="RemoveContainer for \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\""
May 27 04:01:17.877014 systemd[1]: Removed slice kubepods-besteffort-podcc40d681_1020_4117_8945_1be416a58bee.slice - libcontainer container kubepods-besteffort-podcc40d681_1020_4117_8945_1be416a58bee.slice.
May 27 04:01:17.881168 containerd[1541]: time="2025-05-27T04:01:17.881087176Z" level=info msg="RemoveContainer for \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" returns successfully"
May 27 04:01:17.881482 kubelet[2719]: I0527 04:01:17.881451 2719 scope.go:117] "RemoveContainer" containerID="b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70"
May 27 04:01:17.886336 containerd[1541]: time="2025-05-27T04:01:17.886118627Z" level=error msg="ContainerStatus for \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\": not found"
May 27 04:01:17.886411 kubelet[2719]: E0527 04:01:17.886376 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\": not found" containerID="b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70"
May 27 04:01:17.886715 kubelet[2719]: I0527 04:01:17.886400 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70"} err="failed to get container status \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1c40a106eacba33f52d64b9d95f5fa76babeeb62d49025637c2e9533df43a70\": not found"
May 27 04:01:17.886715 kubelet[2719]: I0527 04:01:17.886473 2719 scope.go:117] "RemoveContainer" containerID="6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd"
May 27 04:01:17.887705 systemd[1]: Removed slice kubepods-burstable-pod3bd2b0ce_f53c_403f_999d_f88cb9399e82.slice - libcontainer container kubepods-burstable-pod3bd2b0ce_f53c_403f_999d_f88cb9399e82.slice.
May 27 04:01:17.887818 systemd[1]: kubepods-burstable-pod3bd2b0ce_f53c_403f_999d_f88cb9399e82.slice: Consumed 7.031s CPU time, 124.4M memory peak, 144K read from disk, 13.3M written to disk.
May 27 04:01:17.889629 containerd[1541]: time="2025-05-27T04:01:17.889604357Z" level=info msg="RemoveContainer for \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\""
May 27 04:01:17.895954 containerd[1541]: time="2025-05-27T04:01:17.895919712Z" level=info msg="RemoveContainer for \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" returns successfully"
May 27 04:01:17.896590 kubelet[2719]: I0527 04:01:17.896036 2719 scope.go:117] "RemoveContainer" containerID="583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743"
May 27 04:01:17.897564 containerd[1541]: time="2025-05-27T04:01:17.897376270Z" level=info msg="RemoveContainer for \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\""
May 27 04:01:17.901084 containerd[1541]: time="2025-05-27T04:01:17.901051766Z" level=info msg="RemoveContainer for \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" returns successfully"
May 27 04:01:17.901214 kubelet[2719]: I0527 04:01:17.901183 2719 scope.go:117] "RemoveContainer" containerID="83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a"
May 27 04:01:17.903895 containerd[1541]: time="2025-05-27T04:01:17.903853608Z" level=info msg="RemoveContainer for \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\""
May 27 04:01:17.908180 containerd[1541]: time="2025-05-27T04:01:17.908156851Z" level=info msg="RemoveContainer for \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" returns successfully"
May 27 04:01:17.908420 kubelet[2719]: I0527 04:01:17.908399 2719 scope.go:117] "RemoveContainer" containerID="1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731"
May 27 04:01:17.909586 containerd[1541]: time="2025-05-27T04:01:17.909556607Z" level=info msg="RemoveContainer for \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\""
May 27 04:01:17.912427 containerd[1541]: time="2025-05-27T04:01:17.912397341Z" level=info msg="RemoveContainer for \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" returns successfully"
May 27 04:01:17.912928 kubelet[2719]: I0527 04:01:17.912526 2719 scope.go:117] "RemoveContainer" containerID="5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb"
May 27 04:01:17.913650 containerd[1541]: time="2025-05-27T04:01:17.913621963Z" level=info msg="RemoveContainer for \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\""
May 27 04:01:17.923497 containerd[1541]: time="2025-05-27T04:01:17.923408607Z" level=info msg="RemoveContainer for \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" returns successfully"
May 27 04:01:17.924591 kubelet[2719]: I0527 04:01:17.923826 2719 scope.go:117] "RemoveContainer" containerID="6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd"
May 27 04:01:17.925396 containerd[1541]: time="2025-05-27T04:01:17.925355128Z" level=error msg="ContainerStatus for \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\": not found"
May 27 04:01:17.925848 kubelet[2719]: E0527 04:01:17.925730 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\": not found" containerID="6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd"
May 27 04:01:17.925848 kubelet[2719]: I0527 04:01:17.925759 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd"} err="failed to get container status \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bf7a76fe7535163f5803d35f8c303e5fb4a056bf8de5a1820102097d863f0bd\": not found"
May 27 04:01:17.925848 kubelet[2719]: I0527 04:01:17.925782 2719 scope.go:117] "RemoveContainer" containerID="583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743"
May 27 04:01:17.927030 containerd[1541]: time="2025-05-27T04:01:17.926827307Z" level=error msg="ContainerStatus for \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\": not found"
May 27 04:01:17.927809 kubelet[2719]: E0527 04:01:17.927787 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\": not found" containerID="583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743"
May 27 04:01:17.927981 kubelet[2719]: I0527 04:01:17.927961 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743"} err="failed to get container status \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\": rpc error: code = NotFound desc = an error occurred when try to find container \"583626ef0b9b34188636f9a481c23321987f38b1ce6d8da714e3df7700d00743\": not found"
May 27 04:01:17.928128 kubelet[2719]: I0527 04:01:17.928036 2719 scope.go:117] "RemoveContainer" containerID="83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a"
May 27 04:01:17.928732 containerd[1541]: time="2025-05-27T04:01:17.928702446Z" level=error msg="ContainerStatus for \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\": not found"
May 27 04:01:17.928801 kubelet[2719]: E0527 04:01:17.928788 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\": not found" containerID="83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a"
May 27 04:01:17.928838 kubelet[2719]: I0527 04:01:17.928803 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a"} err="failed to get container status \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\": rpc error: code = NotFound desc = an error occurred when try to find container \"83a78791305745e5c0be390f7273540767660797e7b7794b8cb6f8e19188345a\": not found"
May 27 04:01:17.928838 kubelet[2719]: I0527 04:01:17.928816 2719 scope.go:117] "RemoveContainer" containerID="1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731"
May 27 04:01:17.929049 containerd[1541]: time="2025-05-27T04:01:17.928966202Z" level=error msg="ContainerStatus for \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\": not found"
May 27 04:01:17.929105 kubelet[2719]: E0527 04:01:17.929059 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\": not found" containerID="1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731"
May 27 04:01:17.929203 kubelet[2719]: I0527 04:01:17.929117 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731"} err="failed to get container status \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a53c6e067fd279ca2ace868b58cbd1dcb1fcf4395411b282ddac1eb16fa3731\": not found"
May 27 04:01:17.929203 kubelet[2719]: I0527 04:01:17.929130 2719 scope.go:117] "RemoveContainer" containerID="5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb"
May 27 04:01:17.929314 containerd[1541]: time="2025-05-27T04:01:17.929239969Z" level=error msg="ContainerStatus for \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\": not found"
May 27 04:01:17.929406 kubelet[2719]: E0527 04:01:17.929319 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\": not found" containerID="5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb"
May 27 04:01:17.929406 kubelet[2719]: I0527 04:01:17.929334 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb"} err="failed to get container status \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\": rpc error: code = NotFound desc = an error occurred when try to find container \"5400fdddd831188907ffb2e75e72b5e2005a34a6417b95c03cda30783ff18abb\": not found"
May 27 04:01:18.384210 systemd[1]: var-lib-kubelet-pods-cc40d681\x2d1020\x2d4117\x2d8945\x2d1be416a58bee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxnb4.mount: Deactivated successfully.
May 27 04:01:18.384329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0-shm.mount: Deactivated successfully.
May 27 04:01:18.384416 systemd[1]: var-lib-kubelet-pods-3bd2b0ce\x2df53c\x2d403f\x2d999d\x2df88cb9399e82-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drdwtg.mount: Deactivated successfully.
May 27 04:01:18.384492 systemd[1]: var-lib-kubelet-pods-3bd2b0ce\x2df53c\x2d403f\x2d999d\x2df88cb9399e82-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 27 04:01:18.384568 systemd[1]: var-lib-kubelet-pods-3bd2b0ce\x2df53c\x2d403f\x2d999d\x2df88cb9399e82-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 27 04:01:19.085114 kubelet[2719]: I0527 04:01:19.085063 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd2b0ce-f53c-403f-999d-f88cb9399e82" path="/var/lib/kubelet/pods/3bd2b0ce-f53c-403f-999d-f88cb9399e82/volumes"
May 27 04:01:19.085919 kubelet[2719]: I0527 04:01:19.085895 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc40d681-1020-4117-8945-1be416a58bee" path="/var/lib/kubelet/pods/cc40d681-1020-4117-8945-1be416a58bee/volumes"
May 27 04:01:19.343933 sshd[4727]: Connection closed by 139.178.68.195 port 44408
May 27 04:01:19.346068 sshd-session[4725]: pam_unix(sshd:session): session closed for user core
May 27 04:01:19.350553 systemd[1]: sshd@53-172.234.212.30:22-139.178.68.195:44408.service: Deactivated successfully.
May 27 04:01:19.352733 systemd[1]: session-54.scope: Deactivated successfully.
May 27 04:01:19.355820 systemd-logind[1514]: Session 54 logged out. Waiting for processes to exit.
May 27 04:01:19.356953 systemd-logind[1514]: Removed session 54.
May 27 04:01:19.409333 systemd[1]: Started sshd@54-172.234.212.30:22-139.178.68.195:44414.service - OpenSSH per-connection server daemon (139.178.68.195:44414).
May 27 04:01:19.757322 sshd[4882]: Accepted publickey for core from 139.178.68.195 port 44414 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:19.757862 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:19.763807 systemd-logind[1514]: New session 55 of user core.
May 27 04:01:19.770022 systemd[1]: Started session-55.scope - Session 55 of User core.
May 27 04:01:20.241638 kubelet[2719]: E0527 04:01:20.241569 2719 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 04:01:20.333896 kubelet[2719]: I0527 04:01:20.333600 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="3bd2b0ce-f53c-403f-999d-f88cb9399e82" containerName="cilium-agent"
May 27 04:01:20.333896 kubelet[2719]: I0527 04:01:20.333628 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="cc40d681-1020-4117-8945-1be416a58bee" containerName="cilium-operator"
May 27 04:01:20.346357 systemd[1]: Created slice kubepods-burstable-pod7d71781b_cf3f_4d19_95df_0b4740350d9c.slice - libcontainer container kubepods-burstable-pod7d71781b_cf3f_4d19_95df_0b4740350d9c.slice.
May 27 04:01:20.360110 sshd[4884]: Connection closed by 139.178.68.195 port 44414
May 27 04:01:20.361902 sshd-session[4882]: pam_unix(sshd:session): session closed for user core
May 27 04:01:20.365859 systemd[1]: sshd@54-172.234.212.30:22-139.178.68.195:44414.service: Deactivated successfully.
May 27 04:01:20.370165 systemd[1]: session-55.scope: Deactivated successfully.
May 27 04:01:20.371548 systemd-logind[1514]: Session 55 logged out. Waiting for processes to exit.
May 27 04:01:20.375451 systemd-logind[1514]: Removed session 55.
May 27 04:01:20.388130 kubelet[2719]: I0527 04:01:20.388088 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-cilium-cgroup\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388130 kubelet[2719]: I0527 04:01:20.388118 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-cni-path\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388213 kubelet[2719]: I0527 04:01:20.388137 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d71781b-cf3f-4d19-95df-0b4740350d9c-clustermesh-secrets\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388213 kubelet[2719]: I0527 04:01:20.388154 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d71781b-cf3f-4d19-95df-0b4740350d9c-cilium-config-path\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388213 kubelet[2719]: I0527 04:01:20.388170 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-cilium-run\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388213 kubelet[2719]: I0527 04:01:20.388184 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d71781b-cf3f-4d19-95df-0b4740350d9c-hubble-tls\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388213 kubelet[2719]: I0527 04:01:20.388197 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-hostproc\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388213 kubelet[2719]: I0527 04:01:20.388210 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-etc-cni-netd\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388372 kubelet[2719]: I0527 04:01:20.388226 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-lib-modules\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388372 kubelet[2719]: I0527 04:01:20.388241 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gldx4\" (UniqueName: \"kubernetes.io/projected/7d71781b-cf3f-4d19-95df-0b4740350d9c-kube-api-access-gldx4\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388372 kubelet[2719]: I0527 04:01:20.388256 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-bpf-maps\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388372 kubelet[2719]: I0527 04:01:20.388270 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-host-proc-sys-net\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388372 kubelet[2719]: I0527 04:01:20.388284 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-host-proc-sys-kernel\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388372 kubelet[2719]: I0527 04:01:20.388298 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d71781b-cf3f-4d19-95df-0b4740350d9c-xtables-lock\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.388545 kubelet[2719]: I0527 04:01:20.388313 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d71781b-cf3f-4d19-95df-0b4740350d9c-cilium-ipsec-secrets\") pod \"cilium-7mvt7\" (UID: \"7d71781b-cf3f-4d19-95df-0b4740350d9c\") " pod="kube-system/cilium-7mvt7"
May 27 04:01:20.423193 systemd[1]: Started sshd@55-172.234.212.30:22-139.178.68.195:44426.service - OpenSSH per-connection server daemon (139.178.68.195:44426).
May 27 04:01:20.652791 kubelet[2719]: E0527 04:01:20.652213 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 04:01:20.654165 containerd[1541]: time="2025-05-27T04:01:20.654073271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mvt7,Uid:7d71781b-cf3f-4d19-95df-0b4740350d9c,Namespace:kube-system,Attempt:0,}" May 27 04:01:20.670479 containerd[1541]: time="2025-05-27T04:01:20.670415691Z" level=info msg="connecting to shim c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5" address="unix:///run/containerd/s/045194d261ee486e2d5c0fe816dd59fbeb8f69abd907b5f112c1b1439cf4b4b7" namespace=k8s.io protocol=ttrpc version=3 May 27 04:01:20.700004 systemd[1]: Started cri-containerd-c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5.scope - libcontainer container c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5. 
May 27 04:01:20.731235 containerd[1541]: time="2025-05-27T04:01:20.731155321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mvt7,Uid:7d71781b-cf3f-4d19-95df-0b4740350d9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\"" May 27 04:01:20.732102 kubelet[2719]: E0527 04:01:20.732061 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 04:01:20.736177 containerd[1541]: time="2025-05-27T04:01:20.736143793Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 04:01:20.741285 containerd[1541]: time="2025-05-27T04:01:20.741264287Z" level=info msg="Container 0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be: CDI devices from CRI Config.CDIDevices: []" May 27 04:01:20.745976 containerd[1541]: time="2025-05-27T04:01:20.745952150Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be\"" May 27 04:01:20.746457 containerd[1541]: time="2025-05-27T04:01:20.746393172Z" level=info msg="StartContainer for \"0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be\"" May 27 04:01:20.747598 containerd[1541]: time="2025-05-27T04:01:20.747573143Z" level=info msg="connecting to shim 0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be" address="unix:///run/containerd/s/045194d261ee486e2d5c0fe816dd59fbeb8f69abd907b5f112c1b1439cf4b4b7" protocol=ttrpc version=3 May 27 04:01:20.759852 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 44426 ssh2: RSA 
SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o May 27 04:01:20.762097 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:01:20.771005 systemd[1]: Started cri-containerd-0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be.scope - libcontainer container 0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be. May 27 04:01:20.774380 systemd-logind[1514]: New session 56 of user core. May 27 04:01:20.781025 systemd[1]: Started session-56.scope - Session 56 of User core. May 27 04:01:20.810783 containerd[1541]: time="2025-05-27T04:01:20.810744258Z" level=info msg="StartContainer for \"0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be\" returns successfully" May 27 04:01:20.821859 systemd[1]: cri-containerd-0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be.scope: Deactivated successfully. May 27 04:01:20.823821 containerd[1541]: time="2025-05-27T04:01:20.823793661Z" level=info msg="received exit event container_id:\"0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be\" id:\"0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be\" pid:4958 exited_at:{seconds:1748318480 nanos:823448382}" May 27 04:01:20.824129 containerd[1541]: time="2025-05-27T04:01:20.824107799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be\" id:\"0194a3550ad0a0e6151b65fe760d92f062ee0f22013e590f798d88c5431622be\" pid:4958 exited_at:{seconds:1748318480 nanos:823448382}" May 27 04:01:20.891841 kubelet[2719]: E0527 04:01:20.891811 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" May 27 04:01:20.897612 containerd[1541]: time="2025-05-27T04:01:20.897544664Z" level=info msg="CreateContainer within sandbox 
\"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 04:01:20.904489 containerd[1541]: time="2025-05-27T04:01:20.904414505Z" level=info msg="Container cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a: CDI devices from CRI Config.CDIDevices: []" May 27 04:01:20.911278 containerd[1541]: time="2025-05-27T04:01:20.911197753Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a\"" May 27 04:01:20.912070 containerd[1541]: time="2025-05-27T04:01:20.912015625Z" level=info msg="StartContainer for \"cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a\"" May 27 04:01:20.913128 containerd[1541]: time="2025-05-27T04:01:20.913053462Z" level=info msg="connecting to shim cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a" address="unix:///run/containerd/s/045194d261ee486e2d5c0fe816dd59fbeb8f69abd907b5f112c1b1439cf4b4b7" protocol=ttrpc version=3 May 27 04:01:20.941005 systemd[1]: Started cri-containerd-cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a.scope - libcontainer container cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a. May 27 04:01:20.973630 containerd[1541]: time="2025-05-27T04:01:20.973597937Z" level=info msg="StartContainer for \"cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a\" returns successfully" May 27 04:01:20.984186 systemd[1]: cri-containerd-cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a.scope: Deactivated successfully. 
May 27 04:01:20.985629 containerd[1541]: time="2025-05-27T04:01:20.985601913Z" level=info msg="received exit event container_id:\"cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a\" id:\"cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a\" pid:5004 exited_at:{seconds:1748318480 nanos:984510305}"
May 27 04:01:20.985868 containerd[1541]: time="2025-05-27T04:01:20.985708526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a\" id:\"cd59d67f9c77305b85fd43762a1eee9fc85a8e59c50094c63dfe903b5816791a\" pid:5004 exited_at:{seconds:1748318480 nanos:984510305}"
May 27 04:01:21.001542 sshd[4963]: Connection closed by 139.178.68.195 port 44426
May 27 04:01:21.002234 sshd-session[4895]: pam_unix(sshd:session): session closed for user core
May 27 04:01:21.008171 systemd-logind[1514]: Session 56 logged out. Waiting for processes to exit.
May 27 04:01:21.009038 systemd[1]: sshd@55-172.234.212.30:22-139.178.68.195:44426.service: Deactivated successfully.
May 27 04:01:21.011553 systemd[1]: session-56.scope: Deactivated successfully.
May 27 04:01:21.015211 systemd-logind[1514]: Removed session 56.
May 27 04:01:21.061678 systemd[1]: Started sshd@56-172.234.212.30:22-139.178.68.195:44428.service - OpenSSH per-connection server daemon (139.178.68.195:44428).
May 27 04:01:21.403903 sshd[5044]: Accepted publickey for core from 139.178.68.195 port 44428 ssh2: RSA SHA256:nwL9/grStHcUSnt/HUvv/cLaJF1H4IH344omFh5bv+o
May 27 04:01:21.405456 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 04:01:21.411529 systemd-logind[1514]: New session 57 of user core.
May 27 04:01:21.417190 systemd[1]: Started session-57.scope - Session 57 of User core.
May 27 04:01:21.895214 kubelet[2719]: E0527 04:01:21.895187 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:21.897789 containerd[1541]: time="2025-05-27T04:01:21.897715330Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 04:01:21.913906 containerd[1541]: time="2025-05-27T04:01:21.913534638Z" level=info msg="Container 3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8: CDI devices from CRI Config.CDIDevices: []"
May 27 04:01:21.918662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617629199.mount: Deactivated successfully.
May 27 04:01:21.930663 containerd[1541]: time="2025-05-27T04:01:21.930559148Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8\""
May 27 04:01:21.933304 containerd[1541]: time="2025-05-27T04:01:21.933180238Z" level=info msg="StartContainer for \"3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8\""
May 27 04:01:21.935578 containerd[1541]: time="2025-05-27T04:01:21.934974085Z" level=info msg="connecting to shim 3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8" address="unix:///run/containerd/s/045194d261ee486e2d5c0fe816dd59fbeb8f69abd907b5f112c1b1439cf4b4b7" protocol=ttrpc version=3
May 27 04:01:21.967009 systemd[1]: Started cri-containerd-3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8.scope - libcontainer container 3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8.
May 27 04:01:22.009015 containerd[1541]: time="2025-05-27T04:01:22.008979763Z" level=info msg="StartContainer for \"3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8\" returns successfully"
May 27 04:01:22.013307 systemd[1]: cri-containerd-3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8.scope: Deactivated successfully.
May 27 04:01:22.014717 containerd[1541]: time="2025-05-27T04:01:22.014599051Z" level=info msg="received exit event container_id:\"3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8\" id:\"3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8\" pid:5064 exited_at:{seconds:1748318482 nanos:14436177}"
May 27 04:01:22.015247 containerd[1541]: time="2025-05-27T04:01:22.015215948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8\" id:\"3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8\" pid:5064 exited_at:{seconds:1748318482 nanos:14436177}"
May 27 04:01:22.037270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3febb9c7fa5d0807953b0ac19bfec55c73fbb14d05f77c302780431404007de8-rootfs.mount: Deactivated successfully.
May 27 04:01:22.427706 kubelet[2719]: I0527 04:01:22.427647 2719 setters.go:602] "Node became not ready" node="172-234-212-30" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T04:01:22Z","lastTransitionTime":"2025-05-27T04:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 04:01:22.899750 kubelet[2719]: E0527 04:01:22.899630 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:22.903372 containerd[1541]: time="2025-05-27T04:01:22.903315517Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 04:01:22.919187 containerd[1541]: time="2025-05-27T04:01:22.918950222Z" level=info msg="Container fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177: CDI devices from CRI Config.CDIDevices: []"
May 27 04:01:22.922728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121539968.mount: Deactivated successfully.
May 27 04:01:22.929060 containerd[1541]: time="2025-05-27T04:01:22.929005339Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177\""
May 27 04:01:22.929735 containerd[1541]: time="2025-05-27T04:01:22.929659197Z" level=info msg="StartContainer for \"fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177\""
May 27 04:01:22.930697 containerd[1541]: time="2025-05-27T04:01:22.930659273Z" level=info msg="connecting to shim fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177" address="unix:///run/containerd/s/045194d261ee486e2d5c0fe816dd59fbeb8f69abd907b5f112c1b1439cf4b4b7" protocol=ttrpc version=3
May 27 04:01:22.951117 systemd[1]: Started cri-containerd-fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177.scope - libcontainer container fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177.
May 27 04:01:22.986413 systemd[1]: cri-containerd-fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177.scope: Deactivated successfully.
May 27 04:01:22.989016 containerd[1541]: time="2025-05-27T04:01:22.988982950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177\" id:\"fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177\" pid:5104 exited_at:{seconds:1748318482 nanos:987396758}"
May 27 04:01:22.989085 containerd[1541]: time="2025-05-27T04:01:22.989039142Z" level=info msg="received exit event container_id:\"fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177\" id:\"fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177\" pid:5104 exited_at:{seconds:1748318482 nanos:987396758}"
May 27 04:01:22.989979 containerd[1541]: time="2025-05-27T04:01:22.989947877Z" level=info msg="StartContainer for \"fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177\" returns successfully"
May 27 04:01:23.015603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa3af35f88448f764309f2d3ef80666b3c638b3991c9c20e48c9ae52f7135177-rootfs.mount: Deactivated successfully.
May 27 04:01:23.904531 kubelet[2719]: E0527 04:01:23.904459 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:23.907607 containerd[1541]: time="2025-05-27T04:01:23.907576967Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 04:01:23.926392 containerd[1541]: time="2025-05-27T04:01:23.925946106Z" level=info msg="Container 2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e: CDI devices from CRI Config.CDIDevices: []"
May 27 04:01:23.928746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827190424.mount: Deactivated successfully.
May 27 04:01:23.936788 containerd[1541]: time="2025-05-27T04:01:23.936740224Z" level=info msg="CreateContainer within sandbox \"c47789d604ef10010742152c4ea0171dc27c9e3799c8368567ce5a15476122e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\""
May 27 04:01:23.937339 containerd[1541]: time="2025-05-27T04:01:23.937280188Z" level=info msg="StartContainer for \"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\""
May 27 04:01:23.938063 containerd[1541]: time="2025-05-27T04:01:23.938027298Z" level=info msg="connecting to shim 2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e" address="unix:///run/containerd/s/045194d261ee486e2d5c0fe816dd59fbeb8f69abd907b5f112c1b1439cf4b4b7" protocol=ttrpc version=3
May 27 04:01:23.963019 systemd[1]: Started cri-containerd-2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e.scope - libcontainer container 2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e.
May 27 04:01:24.001027 containerd[1541]: time="2025-05-27T04:01:24.000735288Z" level=info msg="StartContainer for \"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\" returns successfully"
May 27 04:01:24.084906 containerd[1541]: time="2025-05-27T04:01:24.084817156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\" id:\"25e9cc058be90e5d23dbe18ce558f111f454c11c49c4223e01f76f8851f8182d\" pid:5172 exited_at:{seconds:1748318484 nanos:84371073}"
May 27 04:01:24.444936 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 04:01:24.910421 kubelet[2719]: E0527 04:01:24.910317 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:25.101370 containerd[1541]: time="2025-05-27T04:01:25.100812882Z" level=info msg="StopPodSandbox for \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\""
May 27 04:01:25.101370 containerd[1541]: time="2025-05-27T04:01:25.101137881Z" level=info msg="TearDown network for sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" successfully"
May 27 04:01:25.101370 containerd[1541]: time="2025-05-27T04:01:25.101148651Z" level=info msg="StopPodSandbox for \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" returns successfully"
May 27 04:01:25.101994 containerd[1541]: time="2025-05-27T04:01:25.101488371Z" level=info msg="RemovePodSandbox for \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\""
May 27 04:01:25.101994 containerd[1541]: time="2025-05-27T04:01:25.101519022Z" level=info msg="Forcibly stopping sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\""
May 27 04:01:25.101994 containerd[1541]: time="2025-05-27T04:01:25.101607764Z" level=info msg="TearDown network for sandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" successfully"
May 27 04:01:25.103418 containerd[1541]: time="2025-05-27T04:01:25.103395902Z" level=info msg="Ensure that sandbox 77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0 in task-service has been cleanup successfully"
May 27 04:01:25.105439 containerd[1541]: time="2025-05-27T04:01:25.105405236Z" level=info msg="RemovePodSandbox \"77f3c77ac0fdfde0c70714a249d57045f0f900847a752abb11eebdc3aad5f9e0\" returns successfully"
May 27 04:01:25.106167 containerd[1541]: time="2025-05-27T04:01:25.105793206Z" level=info msg="StopPodSandbox for \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\""
May 27 04:01:25.106287 containerd[1541]: time="2025-05-27T04:01:25.106264859Z" level=info msg="TearDown network for sandbox \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" successfully"
May 27 04:01:25.106319 containerd[1541]: time="2025-05-27T04:01:25.106281359Z" level=info msg="StopPodSandbox for \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" returns successfully"
May 27 04:01:25.106578 containerd[1541]: time="2025-05-27T04:01:25.106556626Z" level=info msg="RemovePodSandbox for \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\""
May 27 04:01:25.106613 containerd[1541]: time="2025-05-27T04:01:25.106579757Z" level=info msg="Forcibly stopping sandbox \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\""
May 27 04:01:25.106915 containerd[1541]: time="2025-05-27T04:01:25.106656799Z" level=info msg="TearDown network for sandbox \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" successfully"
May 27 04:01:25.108338 containerd[1541]: time="2025-05-27T04:01:25.108314394Z" level=info msg="Ensure that sandbox 68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2 in task-service has been cleanup successfully"
May 27 04:01:25.110278 containerd[1541]: time="2025-05-27T04:01:25.110240595Z" level=info msg="RemovePodSandbox \"68c1b074eb9d5d1ea8b5b97d53cee8b021642a289e91e25e8dc8530c5afc2da2\" returns successfully"
May 27 04:01:25.955192 containerd[1541]: time="2025-05-27T04:01:25.954581853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\" id:\"9c2dbce60d113c8b333621a734e9a9df2d824c81dd161ae38a9361468f32a2f7\" pid:5254 exit_status:1 exited_at:{seconds:1748318485 nanos:954329786}"
May 27 04:01:26.654235 kubelet[2719]: E0527 04:01:26.654190 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:27.352798 systemd-networkd[1465]: lxc_health: Link UP
May 27 04:01:27.355650 systemd-networkd[1465]: lxc_health: Gained carrier
May 27 04:01:28.087848 containerd[1541]: time="2025-05-27T04:01:28.087807280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\" id:\"2e050922472593bec77ebc16d00689221a1a3bb87ddf96c9755a1fe3425c34b5\" pid:5698 exited_at:{seconds:1748318488 nanos:87516712}"
May 27 04:01:28.092281 kubelet[2719]: E0527 04:01:28.092153 2719 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45282->127.0.0.1:41393: write tcp 127.0.0.1:45282->127.0.0.1:41393: write: connection reset by peer
May 27 04:01:28.658600 kubelet[2719]: E0527 04:01:28.658303 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:28.675807 kubelet[2719]: I0527 04:01:28.675727 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7mvt7" podStartSLOduration=8.675693558 podStartE2EDuration="8.675693558s" podCreationTimestamp="2025-05-27 04:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:01:24.927719757 +0000 UTC m=+359.939371480" watchObservedRunningTime="2025-05-27 04:01:28.675693558 +0000 UTC m=+363.687345271"
May 27 04:01:28.920184 kubelet[2719]: E0527 04:01:28.919636 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:28.980036 systemd-networkd[1465]: lxc_health: Gained IPv6LL
May 27 04:01:29.923072 kubelet[2719]: E0527 04:01:29.922702 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
May 27 04:01:30.222517 containerd[1541]: time="2025-05-27T04:01:30.222420321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\" id:\"b69985da4403bdba18f235f47f8a2671d97d80f35319ada92420e15ae0b3db42\" pid:5732 exited_at:{seconds:1748318490 nanos:221471685}"
May 27 04:01:32.324606 containerd[1541]: time="2025-05-27T04:01:32.324570631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\" id:\"f8e36244f631e9e37dd74dc0d7ccd55c99fd1081b1dba92f1a9e9d2492356a81\" pid:5762 exited_at:{seconds:1748318492 nanos:324169519}"
May 27 04:01:34.417528 containerd[1541]: time="2025-05-27T04:01:34.417480741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa0faebf0044b64e5b2a290454ce5f8b30697e6b28afda0c97284c70ad59b5e\" id:\"474de6c7c601743e2c9f22f9114bd9b406415a9d524e7f5243f7332d8971dbfc\" pid:5792 exited_at:{seconds:1748318494 nanos:416473414}"
May 27 04:01:34.474776 sshd[5046]: Connection closed by 139.178.68.195 port 44428
May 27 04:01:34.475489 sshd-session[5044]: pam_unix(sshd:session): session closed for user core
May 27 04:01:34.481144 systemd-logind[1514]: Session 57 logged out. Waiting for processes to exit.
May 27 04:01:34.481438 systemd[1]: sshd@56-172.234.212.30:22-139.178.68.195:44428.service: Deactivated successfully.
May 27 04:01:34.483525 systemd[1]: session-57.scope: Deactivated successfully.
May 27 04:01:34.485838 systemd-logind[1514]: Removed session 57.