May 15 00:01:21.097708 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:19:37 -00 2025
May 15 00:01:21.097760 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 15 00:01:21.097771 kernel: BIOS-provided physical RAM map:
May 15 00:01:21.097777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 15 00:01:21.097783 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 15 00:01:21.097796 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 00:01:21.097804 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 15 00:01:21.097810 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 15 00:01:21.097817 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 00:01:21.097824 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 00:01:21.097831 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 00:01:21.097837 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 00:01:21.097844 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 15 00:01:21.097851 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 00:01:21.097862 kernel: NX (Execute Disable) protection: active
May 15 00:01:21.097869 kernel: APIC: Static calls initialized
May 15 00:01:21.097876 kernel: SMBIOS 2.8 present.
May 15 00:01:21.097883 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 15 00:01:21.097890 kernel: Hypervisor detected: KVM
May 15 00:01:21.097900 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 00:01:21.097907 kernel: kvm-clock: using sched offset of 4910644810 cycles
May 15 00:01:21.097914 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 00:01:21.097922 kernel: tsc: Detected 2000.000 MHz processor
May 15 00:01:21.097930 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 00:01:21.097938 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 00:01:21.097945 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 15 00:01:21.097953 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 15 00:01:21.097961 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 00:01:21.097970 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 15 00:01:21.097977 kernel: Using GB pages for direct mapping
May 15 00:01:21.097985 kernel: ACPI: Early table checksum verification disabled
May 15 00:01:21.097992 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 15 00:01:21.097999 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:01:21.098007 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:01:21.098014 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:01:21.098022 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 15 00:01:21.098029 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:01:21.098039 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:01:21.098046 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:01:21.098054 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:01:21.098065 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 15 00:01:21.098073 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 15 00:01:21.098081 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 15 00:01:21.098091 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 15 00:01:21.098099 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 15 00:01:21.098106 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 15 00:01:21.098114 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 15 00:01:21.098122 kernel: No NUMA configuration found
May 15 00:01:21.098130 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 15 00:01:21.098138 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
May 15 00:01:21.098146 kernel: Zone ranges:
May 15 00:01:21.098155 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 00:01:21.098165 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 15 00:01:21.098173 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 15 00:01:21.098182 kernel: Movable zone start for each node
May 15 00:01:21.098190 kernel: Early memory node ranges
May 15 00:01:21.098198 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 00:01:21.098206 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 15 00:01:21.098214 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 15 00:01:21.098222 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 15 00:01:21.098230 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 00:01:21.098241 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 00:01:21.098249 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 15 00:01:21.098281 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 00:01:21.098290 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 00:01:21.098298 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 00:01:21.098306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 00:01:21.098314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 00:01:21.098323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 00:01:21.098331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 00:01:21.098342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 00:01:21.098350 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 00:01:21.098358 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 00:01:21.098367 kernel: TSC deadline timer available
May 15 00:01:21.098375 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 15 00:01:21.098383 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 15 00:01:21.098391 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 00:01:21.098399 kernel: kvm-guest: setup PV sched yield
May 15 00:01:21.098407 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 00:01:21.098417 kernel: Booting paravirtualized kernel on KVM
May 15 00:01:21.098426 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 00:01:21.098434 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 15 00:01:21.098442 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 15 00:01:21.098450 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 15 00:01:21.098458 kernel: pcpu-alloc: [0] 0 1
May 15 00:01:21.098466 kernel: kvm-guest: PV spinlocks enabled
May 15 00:01:21.098474 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 00:01:21.098484 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 15 00:01:21.098495 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 00:01:21.098503 kernel: random: crng init done
May 15 00:01:21.098511 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 00:01:21.098519 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 00:01:21.098527 kernel: Fallback order for Node 0: 0
May 15 00:01:21.098536 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 15 00:01:21.098544 kernel: Policy zone: Normal
May 15 00:01:21.098552 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 00:01:21.098563 kernel: software IO TLB: area num 2.
May 15 00:01:21.098572 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 229356K reserved, 0K cma-reserved)
May 15 00:01:21.098580 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 15 00:01:21.098588 kernel: ftrace: allocating 37918 entries in 149 pages
May 15 00:01:21.098596 kernel: ftrace: allocated 149 pages with 4 groups
May 15 00:01:21.098604 kernel: Dynamic Preempt: voluntary
May 15 00:01:21.098612 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 00:01:21.098623 kernel: rcu: RCU event tracing is enabled.
May 15 00:01:21.098631 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 15 00:01:21.098642 kernel: Trampoline variant of Tasks RCU enabled.
May 15 00:01:21.098650 kernel: Rude variant of Tasks RCU enabled.
May 15 00:01:21.098658 kernel: Tracing variant of Tasks RCU enabled.
May 15 00:01:21.098666 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 00:01:21.098674 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 15 00:01:21.098682 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 15 00:01:21.098690 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 00:01:21.098698 kernel: Console: colour VGA+ 80x25
May 15 00:01:21.098706 kernel: printk: console [tty0] enabled
May 15 00:01:21.098714 kernel: printk: console [ttyS0] enabled
May 15 00:01:21.098725 kernel: ACPI: Core revision 20230628
May 15 00:01:21.098733 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 00:01:21.098742 kernel: APIC: Switch to symmetric I/O mode setup
May 15 00:01:21.098757 kernel: x2apic enabled
May 15 00:01:21.098768 kernel: APIC: Switched APIC routing to: physical x2apic
May 15 00:01:21.098777 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 15 00:01:21.098786 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 15 00:01:21.098794 kernel: kvm-guest: setup PV IPIs
May 15 00:01:21.098803 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 00:01:21.098812 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 00:01:21.098820 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 15 00:01:21.098829 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 00:01:21.098840 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 00:01:21.098848 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 00:01:21.098857 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 00:01:21.098865 kernel: Spectre V2 : Mitigation: Retpolines
May 15 00:01:21.098876 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 00:01:21.098885 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 15 00:01:21.098894 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 00:01:21.098903 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 15 00:01:21.098911 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 15 00:01:21.098920 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 15 00:01:21.098929 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 15 00:01:21.098938 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 00:01:21.098946 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 00:01:21.098957 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 00:01:21.098966 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 15 00:01:21.098975 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 00:01:21.098983 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 15 00:01:21.098992 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 15 00:01:21.099000 kernel: Freeing SMP alternatives memory: 32K
May 15 00:01:21.099009 kernel: pid_max: default: 32768 minimum: 301
May 15 00:01:21.099017 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 00:01:21.099028 kernel: landlock: Up and running.
May 15 00:01:21.099037 kernel: SELinux: Initializing.
May 15 00:01:21.099045 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:01:21.099054 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:01:21.099063 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 15 00:01:21.099071 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 00:01:21.099080 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 00:01:21.099089 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 15 00:01:21.099097 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 00:01:21.099108 kernel: ... version: 0
May 15 00:01:21.099116 kernel: ... bit width: 48
May 15 00:01:21.099124 kernel: ... generic registers: 6
May 15 00:01:21.099131 kernel: ... value mask: 0000ffffffffffff
May 15 00:01:21.099138 kernel: ... max period: 00007fffffffffff
May 15 00:01:21.099145 kernel: ... fixed-purpose events: 0
May 15 00:01:21.099152 kernel: ... event mask: 000000000000003f
May 15 00:01:21.099159 kernel: signal: max sigframe size: 3376
May 15 00:01:21.099166 kernel: rcu: Hierarchical SRCU implementation.
May 15 00:01:21.099174 kernel: rcu: Max phase no-delay instances is 400.
May 15 00:01:21.099183 kernel: smp: Bringing up secondary CPUs ...
May 15 00:01:21.099190 kernel: smpboot: x86: Booting SMP configuration:
May 15 00:01:21.099197 kernel: .... node #0, CPUs: #1
May 15 00:01:21.099204 kernel: smp: Brought up 1 node, 2 CPUs
May 15 00:01:21.099212 kernel: smpboot: Max logical packages: 1
May 15 00:01:21.099219 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 15 00:01:21.099226 kernel: devtmpfs: initialized
May 15 00:01:21.099233 kernel: x86/mm: Memory block size: 128MB
May 15 00:01:21.099240 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 00:01:21.099250 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 15 00:01:21.101308 kernel: pinctrl core: initialized pinctrl subsystem
May 15 00:01:21.101318 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 00:01:21.101326 kernel: audit: initializing netlink subsys (disabled)
May 15 00:01:21.101333 kernel: audit: type=2000 audit(1747267280.205:1): state=initialized audit_enabled=0 res=1
May 15 00:01:21.101341 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 00:01:21.101545 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 00:01:21.101553 kernel: cpuidle: using governor menu
May 15 00:01:21.101560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 00:01:21.101571 kernel: dca service started, version 1.12.1
May 15 00:01:21.101579 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 15 00:01:21.101586 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 15 00:01:21.101594 kernel: PCI: Using configuration type 1 for base access
May 15 00:01:21.101601 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 00:01:21.101609 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 00:01:21.101616 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 15 00:01:21.101623 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 00:01:21.101631 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 15 00:01:21.101640 kernel: ACPI: Added _OSI(Module Device)
May 15 00:01:21.101648 kernel: ACPI: Added _OSI(Processor Device)
May 15 00:01:21.101655 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 00:01:21.101662 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 00:01:21.101786 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 00:01:21.101816 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 15 00:01:21.101826 kernel: ACPI: Interpreter enabled
May 15 00:01:21.101835 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 00:01:21.101843 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 00:01:21.101868 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 00:01:21.101876 kernel: PCI: Using E820 reservations for host bridge windows
May 15 00:01:21.101885 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 00:01:21.101893 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 00:01:21.102178 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 00:01:21.104347 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 00:01:21.104478 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 00:01:21.104494 kernel: PCI host bridge to bus 0000:00
May 15 00:01:21.104624 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 00:01:21.104734 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 00:01:21.104842 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 00:01:21.104946 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 15 00:01:21.105051 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 00:01:21.105154 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 15 00:01:21.105462 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 00:01:21.105603 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 15 00:01:21.105743 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 15 00:01:21.105860 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 15 00:01:21.105974 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 15 00:01:21.106089 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 15 00:01:21.106203 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 00:01:21.106541 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
May 15 00:01:21.106660 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
May 15 00:01:21.106775 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 15 00:01:21.106925 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 00:01:21.107057 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 15 00:01:21.107173 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
May 15 00:01:21.109087 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 15 00:01:21.109213 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 00:01:21.109349 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 15 00:01:21.109475 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 15 00:01:21.109592 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 00:01:21.109715 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 15 00:01:21.109829 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
May 15 00:01:21.109949 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
May 15 00:01:21.110070 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 15 00:01:21.110397 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 15 00:01:21.110411 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 00:01:21.110420 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 00:01:21.110429 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 00:01:21.110437 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 00:01:21.110445 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 00:01:21.110458 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 00:01:21.110466 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 00:01:21.110603 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 00:01:21.110641 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 00:01:21.110653 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 00:01:21.110663 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 00:01:21.110673 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 00:01:21.110683 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 00:01:21.110693 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 00:01:21.110723 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 00:01:21.110733 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 00:01:21.110743 kernel: iommu: Default domain type: Translated
May 15 00:01:21.110753 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 00:01:21.110764 kernel: PCI: Using ACPI for IRQ routing
May 15 00:01:21.110774 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 00:01:21.110785 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 15 00:01:21.110795 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 15 00:01:21.111112 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 00:01:21.111246 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 00:01:21.111868 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 00:01:21.111883 kernel: vgaarb: loaded
May 15 00:01:21.111892 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 00:01:21.111900 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 00:01:21.111909 kernel: clocksource: Switched to clocksource kvm-clock
May 15 00:01:21.111917 kernel: VFS: Disk quotas dquot_6.6.0
May 15 00:01:21.111926 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 00:01:21.111940 kernel: pnp: PnP ACPI init
May 15 00:01:21.112081 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 00:01:21.112096 kernel: pnp: PnP ACPI: found 5 devices
May 15 00:01:21.112105 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 00:01:21.112114 kernel: NET: Registered PF_INET protocol family
May 15 00:01:21.112122 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:01:21.112130 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:01:21.112139 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:01:21.112152 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:01:21.112160 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 00:01:21.112169 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:01:21.112177 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:01:21.112185 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:01:21.112193 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:01:21.112202 kernel: NET: Registered PF_XDP protocol family
May 15 00:01:21.112530 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 00:01:21.112641 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 00:01:21.112754 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 00:01:21.112859 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 15 00:01:21.112965 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 00:01:21.113069 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 15 00:01:21.113080 kernel: PCI: CLS 0 bytes, default 64
May 15 00:01:21.113089 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 15 00:01:21.113097 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 15 00:01:21.113106 kernel: Initialise system trusted keyrings
May 15 00:01:21.113117 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:01:21.113126 kernel: Key type asymmetric registered
May 15 00:01:21.113134 kernel: Asymmetric key parser 'x509' registered
May 15 00:01:21.113142 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 15 00:01:21.113150 kernel: io scheduler mq-deadline registered
May 15 00:01:21.113157 kernel: io scheduler kyber registered
May 15 00:01:21.113165 kernel: io scheduler bfq registered
May 15 00:01:21.113172 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 00:01:21.113182 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 00:01:21.113192 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 00:01:21.113200 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:01:21.113208 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 00:01:21.113216 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 00:01:21.113223 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 00:01:21.113231 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 00:01:21.113379 kernel: rtc_cmos 00:03: RTC can wake from S4
May 15 00:01:21.113392 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 00:01:21.113501 kernel: rtc_cmos 00:03: registered as rtc0
May 15 00:01:21.113797 kernel: rtc_cmos 00:03: setting system clock to 2025-05-15T00:01:20 UTC (1747267280)
May 15 00:01:21.113905 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 00:01:21.113915 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 00:01:21.113923 kernel: NET: Registered PF_INET6 protocol family
May 15 00:01:21.113931 kernel: Segment Routing with IPv6
May 15 00:01:21.113939 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:01:21.113947 kernel: NET: Registered PF_PACKET protocol family
May 15 00:01:21.113955 kernel: Key type dns_resolver registered
May 15 00:01:21.113966 kernel: IPI shorthand broadcast: enabled
May 15 00:01:21.113974 kernel: sched_clock: Marking stable (827086980, 220938800)->(1117190840, -69165060)
May 15 00:01:21.113981 kernel: registered taskstats version 1
May 15 00:01:21.113989 kernel: Loading compiled-in X.509 certificates
May 15 00:01:21.113997 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: e21d6dc0691a7e1e8bef90d9217bc8c09d6860f3'
May 15 00:01:21.114005 kernel: Key type .fscrypt registered
May 15 00:01:21.114013 kernel: Key type fscrypt-provisioning registered
May 15 00:01:21.114021 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:01:21.114029 kernel: ima: Allocated hash algorithm: sha1
May 15 00:01:21.114038 kernel: ima: No architecture policies found
May 15 00:01:21.114046 kernel: clk: Disabling unused clocks
May 15 00:01:21.114054 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 15 00:01:21.114061 kernel: Write protecting the kernel read-only data: 38912k
May 15 00:01:21.114070 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 15 00:01:21.114077 kernel: Run /init as init process
May 15 00:01:21.114085 kernel: with arguments:
May 15 00:01:21.114093 kernel: /init
May 15 00:01:21.114101 kernel: with environment:
May 15 00:01:21.114111 kernel: HOME=/
May 15 00:01:21.114118 kernel: TERM=linux
May 15 00:01:21.114125 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:01:21.114135 systemd[1]: Successfully made /usr/ read-only.
May 15 00:01:21.114147 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 00:01:21.114157 systemd[1]: Detected virtualization kvm.
May 15 00:01:21.114165 systemd[1]: Detected architecture x86-64.
May 15 00:01:21.114173 systemd[1]: Running in initrd.
May 15 00:01:21.114183 systemd[1]: No hostname configured, using default hostname.
May 15 00:01:21.114192 systemd[1]: Hostname set to .
May 15 00:01:21.114200 systemd[1]: Initializing machine ID from random generator.
May 15 00:01:21.114208 systemd[1]: Queued start job for default target initrd.target.
May 15 00:01:21.114232 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:01:21.114246 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:01:21.114358 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 00:01:21.114369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:01:21.114377 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 00:01:21.114386 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 00:01:21.114396 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 00:01:21.114404 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 00:01:21.114415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:01:21.114424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:01:21.114432 systemd[1]: Reached target paths.target - Path Units.
May 15 00:01:21.114441 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:01:21.114448 systemd[1]: Reached target swap.target - Swaps.
May 15 00:01:21.114457 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:01:21.114465 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:01:21.114473 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:01:21.114482 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 00:01:21.114493 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 15 00:01:21.114501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:01:21.114509 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:01:21.114517 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:01:21.114525 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:01:21.114533 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 00:01:21.114541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:01:21.114550 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 00:01:21.114748 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:01:21.114756 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:01:21.114764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:01:21.114772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:01:21.114780 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 00:01:21.114789 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:01:21.114802 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:01:21.114847 systemd-journald[178]: Collecting audit messages is disabled.
May 15 00:01:21.114874 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 00:01:21.114885 systemd-journald[178]: Journal started
May 15 00:01:21.114904 systemd-journald[178]: Runtime Journal (/run/log/journal/360a51a254d34a379cf7c5f4a71201bf) is 8M, max 78.3M, 70.3M free.
May 15 00:01:21.035811 systemd-modules-load[179]: Inserted module 'overlay'
May 15 00:01:21.154242 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:01:21.178645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:01:21.187425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:01:21.190133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:01:21.195164 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:01:21.210442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:01:21.215296 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:01:21.216004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:01:21.220620 kernel: Bridge firewalling registered
May 15 00:01:21.219582 systemd-modules-load[179]: Inserted module 'br_netfilter'
May 15 00:01:21.223011 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:01:21.245060 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:01:21.246658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:01:21.252617 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 00:01:21.255527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:01:21.269514 dracut-cmdline[208]: dracut-dracut-053
May 15 00:01:21.273520 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=23d816f2beca10c7a75ccdd203c170f89f29125f08ff6f3fdf90f8fa61b342cc
May 15 00:01:21.278647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:01:21.287591 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:01:21.322616 systemd-resolved[231]: Positive Trust Anchors:
May 15 00:01:21.322630 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:01:21.322656 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:01:21.329631 systemd-resolved[231]: Defaulting to hostname 'linux'.
May 15 00:01:21.330806 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:01:21.331423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:01:21.350290 kernel: SCSI subsystem initialized
May 15 00:01:21.360299 kernel: Loading iSCSI transport class v2.0-870.
May 15 00:01:21.370293 kernel: iscsi: registered transport (tcp)
May 15 00:01:21.443650 kernel: iscsi: registered transport (qla4xxx)
May 15 00:01:21.443690 kernel: QLogic iSCSI HBA Driver
May 15 00:01:21.497708 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 00:01:21.503474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 00:01:21.530057 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 00:01:21.530107 kernel: device-mapper: uevent: version 1.0.3
May 15 00:01:21.530687 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 00:01:21.574309 kernel: raid6: avx2x4 gen() 34412 MB/s
May 15 00:01:21.592291 kernel: raid6: avx2x2 gen() 33323 MB/s
May 15 00:01:21.610687 kernel: raid6: avx2x1 gen() 22478 MB/s
May 15 00:01:21.610806 kernel: raid6: using algorithm avx2x4 gen() 34412 MB/s
May 15 00:01:21.629663 kernel: raid6: .... xor() 3819 MB/s, rmw enabled
May 15 00:01:21.629718 kernel: raid6: using avx2x2 recovery algorithm
May 15 00:01:21.649293 kernel: xor: automatically using best checksumming function avx
May 15 00:01:21.914310 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 00:01:21.928803 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:01:21.934436 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:01:21.949998 systemd-udevd[397]: Using default interface naming scheme 'v255'.
May 15 00:01:21.954933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:01:21.962403 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 00:01:21.978768 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
May 15 00:01:22.017523 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:01:22.024527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:01:22.125907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:01:22.134475 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 00:01:22.147450 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 00:01:22.150593 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:01:22.152084 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:01:22.153487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:01:22.160525 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 00:01:22.172816 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:01:22.209324 kernel: scsi host0: Virtio SCSI HBA
May 15 00:01:22.215295 kernel: cryptd: max_cpu_qlen set to 1000
May 15 00:01:22.304293 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 15 00:01:22.323412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:01:22.324685 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:01:22.326373 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:01:22.327897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:01:22.328071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:01:22.330317 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:01:22.333304 kernel: libata version 3.00 loaded.
May 15 00:01:22.340042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:01:22.362898 kernel: AVX2 version of gcm_enc/dec engaged.
May 15 00:01:22.362917 kernel: AES CTR mode by8 optimization enabled
May 15 00:01:22.385330 kernel: ahci 0000:00:1f.2: version 3.0
May 15 00:01:22.385568 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 00:01:22.388301 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 15 00:01:22.388497 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 15 00:01:22.388639 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 15 00:01:22.388782 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 00:01:22.388913 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 15 00:01:22.398819 kernel: scsi host1: ahci
May 15 00:01:22.398997 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 15 00:01:22.399142 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 15 00:01:22.399310 kernel: scsi host2: ahci
May 15 00:01:22.402948 kernel: scsi host3: ahci
May 15 00:01:22.403106 kernel: scsi host4: ahci
May 15 00:01:22.404392 kernel: scsi host5: ahci
May 15 00:01:22.417515 kernel: scsi host6: ahci
May 15 00:01:22.417688 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
May 15 00:01:22.417702 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
May 15 00:01:22.417714 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
May 15 00:01:22.417726 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
May 15 00:01:22.417737 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
May 15 00:01:22.417748 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
May 15 00:01:22.449078 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 00:01:22.449105 kernel: GPT:9289727 != 167739391
May 15 00:01:22.449116 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 00:01:22.449132 kernel: GPT:9289727 != 167739391
May 15 00:01:22.449142 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 00:01:22.449152 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 00:01:22.452277 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 15 00:01:22.520104 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:01:22.526427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:01:22.549413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:01:22.730910 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 00:01:22.730989 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 15 00:01:22.731001 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 00:01:22.731012 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 00:01:22.731022 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 00:01:22.733283 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 00:01:22.807634 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 15 00:01:22.925525 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (478)
May 15 00:01:22.932492 kernel: BTRFS: device fsid 11358d57-dfa4-4197-9524-595753ed5512 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (470)
May 15 00:01:22.931810 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 15 00:01:22.949493 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 00:01:22.957994 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 15 00:01:22.958846 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 15 00:01:22.965551 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 00:01:22.973040 disk-uuid[572]: Primary Header is updated.
May 15 00:01:22.973040 disk-uuid[572]: Secondary Entries is updated.
May 15 00:01:22.973040 disk-uuid[572]: Secondary Header is updated.
May 15 00:01:22.978284 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 00:01:22.984303 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 00:01:23.987298 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 15 00:01:23.987912 disk-uuid[573]: The operation has completed successfully.
May 15 00:01:24.041589 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 00:01:24.041737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 00:01:24.082395 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 00:01:24.085980 sh[587]: Success
May 15 00:01:24.100963 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 15 00:01:24.149021 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 00:01:24.162370 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 00:01:24.164135 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 00:01:24.194431 kernel: BTRFS info (device dm-0): first mount of filesystem 11358d57-dfa4-4197-9524-595753ed5512
May 15 00:01:24.194490 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 00:01:24.196679 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 00:01:24.200094 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 00:01:24.200110 kernel: BTRFS info (device dm-0): using free space tree
May 15 00:01:24.209416 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 15 00:01:24.211324 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 00:01:24.213585 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 00:01:24.222615 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 00:01:24.226427 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 00:01:24.254226 kernel: BTRFS info (device sda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 15 00:01:24.254316 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:01:24.254330 kernel: BTRFS info (device sda6): using free space tree
May 15 00:01:24.258978 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 15 00:01:24.259004 kernel: BTRFS info (device sda6): auto enabling async discard
May 15 00:01:24.265296 kernel: BTRFS info (device sda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 15 00:01:24.267950 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 00:01:24.273586 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 00:01:24.626821 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 00:01:24.632234 ignition[674]: Ignition 2.20.0
May 15 00:01:24.632248 ignition[674]: Stage: fetch-offline
May 15 00:01:24.632311 ignition[674]: no configs at "/usr/lib/ignition/base.d"
May 15 00:01:24.632322 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 00:01:24.632429 ignition[674]: parsed url from cmdline: ""
May 15 00:01:24.637799 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:01:24.632434 ignition[674]: no config URL provided
May 15 00:01:24.632439 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
May 15 00:01:24.632448 ignition[674]: no config at "/usr/lib/ignition/user.ign"
May 15 00:01:24.632454 ignition[674]: failed to fetch config: resource requires networking
May 15 00:01:24.641674 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 00:01:24.632769 ignition[674]: Ignition finished successfully
May 15 00:01:24.667163 systemd-networkd[770]: lo: Link UP
May 15 00:01:24.667176 systemd-networkd[770]: lo: Gained carrier
May 15 00:01:24.668746 systemd-networkd[770]: Enumeration completed
May 15 00:01:24.668820 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:01:24.669448 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:01:24.669453 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:01:24.670206 systemd[1]: Reached target network.target - Network.
May 15 00:01:24.676018 systemd-networkd[770]: eth0: Link UP
May 15 00:01:24.676024 systemd-networkd[770]: eth0: Gained carrier
May 15 00:01:24.676032 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:01:24.684397 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 15 00:01:24.931869 ignition[774]: Ignition 2.20.0
May 15 00:01:24.931883 ignition[774]: Stage: fetch
May 15 00:01:24.932082 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 15 00:01:24.932095 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 00:01:24.932210 ignition[774]: parsed url from cmdline: ""
May 15 00:01:24.932214 ignition[774]: no config URL provided
May 15 00:01:24.932222 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
May 15 00:01:24.932232 ignition[774]: no config at "/usr/lib/ignition/user.ign"
May 15 00:01:24.932281 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #1
May 15 00:01:24.932591 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 00:01:25.133460 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #2
May 15 00:01:25.133689 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 15 00:01:25.148321 systemd-networkd[770]: eth0: DHCPv4 address 172.237.148.154/24, gateway 172.237.148.1 acquired from 23.40.196.199
May 15 00:01:25.533800 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #3
May 15 00:01:25.625062 ignition[774]: PUT result: OK
May 15 00:01:25.625126 ignition[774]: GET http://169.254.169.254/v1/user-data: attempt #1
May 15 00:01:25.735927 ignition[774]: GET result: OK
May 15 00:01:25.736146 ignition[774]: parsing config with SHA512: f5e7ac886d68cb9ab5ae5074bb71647a76251fd432d54048933b753a1fcb4fd78ae98cc6dfcfbf6761ea3a2c3285fb3d7aead968f58069e86680b43246ff8d23
May 15 00:01:25.742442 unknown[774]: fetched base config from "system"
May 15 00:01:25.742454 unknown[774]: fetched base config from "system"
May 15 00:01:25.742910 ignition[774]: fetch: fetch complete
May 15 00:01:25.742460 unknown[774]: fetched user config from "akamai"
May 15 00:01:25.742916 ignition[774]: fetch: fetch passed
May 15 00:01:25.746080 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 15 00:01:25.742956 ignition[774]: Ignition finished successfully
May 15 00:01:25.754417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 00:01:25.768573 ignition[781]: Ignition 2.20.0
May 15 00:01:25.768583 ignition[781]: Stage: kargs
May 15 00:01:25.768722 ignition[781]: no configs at "/usr/lib/ignition/base.d"
May 15 00:01:25.768733 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 00:01:25.769616 ignition[781]: kargs: kargs passed
May 15 00:01:25.769655 ignition[781]: Ignition finished successfully
May 15 00:01:25.772747 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 00:01:25.778370 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 00:01:25.804571 ignition[787]: Ignition 2.20.0
May 15 00:01:25.804586 ignition[787]: Stage: disks
May 15 00:01:25.804750 ignition[787]: no configs at "/usr/lib/ignition/base.d"
May 15 00:01:25.804761 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 00:01:25.805517 ignition[787]: disks: disks passed
May 15 00:01:25.806741 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 00:01:25.805555 ignition[787]: Ignition finished successfully
May 15 00:01:25.808055 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 00:01:25.808617 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 00:01:25.830827 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:01:25.831771 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:01:25.832895 systemd[1]: Reached target basic.target - Basic System.
May 15 00:01:25.839378 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 00:01:25.853502 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 00:01:25.855479 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 00:01:25.856727 systemd-networkd[770]: eth0: Gained IPv6LL
May 15 00:01:25.867351 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 00:01:25.953274 kernel: EXT4-fs (sda9): mounted filesystem 36fdaeac-383d-468b-a0a4-9f47e3957a15 r/w with ordered data mode. Quota mode: none.
May 15 00:01:25.954222 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 00:01:25.955131 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 00:01:25.967319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:01:25.969837 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 00:01:25.970603 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 00:01:25.970643 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 00:01:25.970665 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:01:25.976270 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 00:01:25.982296 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (803)
May 15 00:01:25.982415 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 00:01:25.987901 kernel: BTRFS info (device sda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 15 00:01:25.987928 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:01:25.991000 kernel: BTRFS info (device sda6): using free space tree
May 15 00:01:25.996318 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 15 00:01:25.996348 kernel: BTRFS info (device sda6): auto enabling async discard
May 15 00:01:25.997965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:01:26.037665 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
May 15 00:01:26.042125 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
May 15 00:01:26.050991 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
May 15 00:01:26.055067 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 00:01:26.154814 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 00:01:26.159481 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 00:01:26.162387 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 00:01:26.169040 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 00:01:26.171893 kernel: BTRFS info (device sda6): last unmount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 15 00:01:26.198835 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 00:01:26.200897 ignition[921]: INFO : Ignition 2.20.0
May 15 00:01:26.200897 ignition[921]: INFO : Stage: mount
May 15 00:01:26.202170 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:01:26.202170 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 00:01:26.202170 ignition[921]: INFO : mount: mount passed
May 15 00:01:26.202170 ignition[921]: INFO : Ignition finished successfully
May 15 00:01:26.203122 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 00:01:26.210373 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 00:01:26.961377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:01:26.975286 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (933)
May 15 00:01:26.978832 kernel: BTRFS info (device sda6): first mount of filesystem 26320528-a534-4245-a65e-42f09448b5f1
May 15 00:01:26.978855 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:01:26.980693 kernel: BTRFS info (device sda6): using free space tree
May 15 00:01:26.986823 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 15 00:01:26.986846 kernel: BTRFS info (device sda6): auto enabling async discard
May 15 00:01:26.989330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:01:27.006326 ignition[949]: INFO : Ignition 2.20.0
May 15 00:01:27.007077 ignition[949]: INFO : Stage: files
May 15 00:01:27.008284 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:01:27.008284 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 00:01:27.009775 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
May 15 00:01:27.009775 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 00:01:27.009775 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 00:01:27.012351 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 00:01:27.013295 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 00:01:27.013295 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 00:01:27.012951 unknown[949]: wrote ssh authorized keys file for user: core
May 15 00:01:27.015596 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 00:01:27.015596 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 15 00:01:27.344381 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 00:01:27.582843 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 00:01:27.584059 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:01:27.584059 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 00:01:27.870978 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 00:01:28.186598 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:01:28.186598 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:01:28.189827 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 15 00:01:28.392013 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 00:01:29.355853 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 15 00:01:29.379613 ignition[949]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:01:29.379613 ignition[949]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:01:29.379613 ignition[949]: INFO : files: files passed
May 15 00:01:29.379613 ignition[949]: INFO : Ignition finished successfully
May 15 00:01:29.362819 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 00:01:29.392552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 00:01:29.396694 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 00:01:29.397966 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 00:01:29.398068 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 00:01:29.411541 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:01:29.411541 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:01:29.413342 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:01:29.414389 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 00:01:29.415333 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 00:01:29.421591 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 00:01:29.442712 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 00:01:29.442836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 00:01:29.444210 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 00:01:29.445419 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 00:01:29.446773 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 00:01:29.451353 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 00:01:29.463572 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:01:29.470567 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 00:01:29.478896 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 00:01:29.479894 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:01:29.481318 systemd[1]: Stopped target timers.target - Timer Units.
May 15 00:01:29.482623 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 00:01:29.482755 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:01:29.483937 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 00:01:29.484767 systemd[1]: Stopped target basic.target - Basic System.
May 15 00:01:29.485988 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 00:01:29.487059 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:01:29.488149 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 00:01:29.489638 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 00:01:29.490866 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:01:29.492069 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 00:01:29.493173 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 00:01:29.494342 systemd[1]: Stopped target swap.target - Swaps.
May 15 00:01:29.495425 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 00:01:29.495522 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:01:29.496744 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 00:01:29.497617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:01:29.498822 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 00:01:29.499158 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:01:29.500170 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 00:01:29.500281 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 00:01:29.502157 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 00:01:29.502276 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 00:01:29.503179 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 00:01:29.503339 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 00:01:29.509445 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 00:01:29.509991 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 00:01:29.510143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:01:29.512396 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 00:01:29.517596 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 00:01:29.517716 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:01:29.518333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 00:01:29.518425 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:01:29.526984 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 00:01:29.527758 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 00:01:29.548363 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 00:01:29.553032 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 00:01:29.553756 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 00:01:29.561674 ignition[1003]: INFO : Ignition 2.20.0
May 15 00:01:29.561674 ignition[1003]: INFO : Stage: umount
May 15 00:01:29.563084 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:01:29.563084 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 15 00:01:29.563084 ignition[1003]: INFO : umount: umount passed
May 15 00:01:29.565190 ignition[1003]: INFO : Ignition finished successfully
May 15 00:01:29.564703 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 00:01:29.564815 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 00:01:29.566075 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 00:01:29.566151 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 00:01:29.566760 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 00:01:29.566837 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 00:01:29.567838 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 00:01:29.567883 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 15 00:01:29.568955 systemd[1]: Stopped target network.target - Network.
May 15 00:01:29.591988 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 00:01:29.592044 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 00:01:29.593043 systemd[1]: Stopped target paths.target - Path Units.
May 15 00:01:29.594005 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 00:01:29.598337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:01:29.598903 systemd[1]: Stopped target slices.target - Slice Units.
May 15 00:01:29.600184 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 00:01:29.601842 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 00:01:29.601885 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:01:29.602969 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 00:01:29.603005 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:01:29.604136 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 00:01:29.604186 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 00:01:29.605309 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 00:01:29.605354 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 00:01:29.606636 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 00:01:29.606681 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 00:01:29.607594 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 00:01:29.608701 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 00:01:29.612793 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 00:01:29.612897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 00:01:29.618750 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 15 00:01:29.619015 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 00:01:29.619149 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 00:01:29.621337 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 15 00:01:29.622514 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 00:01:29.622570 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:01:29.628316 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 00:01:29.628816 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 00:01:29.628867 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 00:01:29.629680 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:01:29.629726 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 00:01:29.632556 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 00:01:29.632605 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 00:01:29.633353 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 00:01:29.633403 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:01:29.635149 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:01:29.637481 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 00:01:29.637544 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 15 00:01:29.648042 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 00:01:29.648148 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 00:01:29.654977 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 00:01:29.655143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:01:29.656577 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 00:01:29.656622 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 00:01:29.657770 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 00:01:29.657805 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:01:29.659045 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 00:01:29.659091 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:01:29.660635 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 00:01:29.660681 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 00:01:29.661867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:01:29.661913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:01:29.672400 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 00:01:29.673587 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 00:01:29.673793 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:01:29.676620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:01:29.676667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:01:29.679026 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 00:01:29.679088 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 15 00:01:29.679610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 00:01:29.679707 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 00:01:29.680961 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 00:01:29.689629 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 00:01:29.696643 systemd[1]: Switching root.
May 15 00:01:29.727450 systemd-journald[178]: Journal stopped
May 15 00:01:30.888320 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
May 15 00:01:30.888344 kernel: SELinux: policy capability network_peer_controls=1
May 15 00:01:30.888355 kernel: SELinux: policy capability open_perms=1
May 15 00:01:30.888365 kernel: SELinux: policy capability extended_socket_class=1
May 15 00:01:30.888373 kernel: SELinux: policy capability always_check_network=0
May 15 00:01:30.888385 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 00:01:30.888395 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 00:01:30.888404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 00:01:30.888412 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 00:01:30.888421 kernel: audit: type=1403 audit(1747267289.866:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 00:01:30.888431 systemd[1]: Successfully loaded SELinux policy in 50.185ms.
May 15 00:01:30.888444 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.023ms.
May 15 00:01:30.888455 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 15 00:01:30.888465 systemd[1]: Detected virtualization kvm.
May 15 00:01:30.888475 systemd[1]: Detected architecture x86-64.
May 15 00:01:30.888485 systemd[1]: Detected first boot.
May 15 00:01:30.888498 systemd[1]: Initializing machine ID from random generator.
May 15 00:01:30.888507 zram_generator::config[1048]: No configuration found.
May 15 00:01:30.888518 kernel: Guest personality initialized and is inactive
May 15 00:01:30.888527 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 15 00:01:30.888536 kernel: Initialized host personality
May 15 00:01:30.888545 kernel: NET: Registered PF_VSOCK protocol family
May 15 00:01:30.888554 systemd[1]: Populated /etc with preset unit settings.
May 15 00:01:30.888567 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 15 00:01:30.888577 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 00:01:30.888586 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 00:01:30.888596 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 00:01:30.888607 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 00:01:30.888617 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 00:01:30.888627 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 00:01:30.888639 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 00:01:30.888649 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 00:01:30.888659 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 00:01:30.888669 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 00:01:30.888678 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 00:01:30.888688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:01:30.888699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:01:30.888708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 00:01:30.888718 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 00:01:30.888731 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 00:01:30.888744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:01:30.888754 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 00:01:30.888765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:01:30.888775 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 00:01:30.888785 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 00:01:30.888795 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 00:01:30.888807 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 00:01:30.888817 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:01:30.888827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:01:30.888837 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:01:30.888847 systemd[1]: Reached target swap.target - Swaps.
May 15 00:01:30.888857 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 00:01:30.888867 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 00:01:30.889488 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 15 00:01:30.889509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:01:30.889528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:01:30.889538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:01:30.889549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 00:01:30.889559 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 00:01:30.889572 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 00:01:30.889581 systemd[1]: Mounting media.mount - External Media Directory...
May 15 00:01:30.889592 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:01:30.889602 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 00:01:30.889612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 00:01:30.889622 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 00:01:30.889632 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 00:01:30.889643 systemd[1]: Reached target machines.target - Containers.
May 15 00:01:30.889655 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 00:01:30.889665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:01:30.889675 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:01:30.889685 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 00:01:30.889695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:01:30.889716 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:01:30.889727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:01:30.889737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 00:01:30.889747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:01:30.889760 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 00:01:30.889770 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 00:01:30.889780 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 00:01:30.889790 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 00:01:30.889800 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 00:01:30.889810 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 00:01:30.889820 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:01:30.889830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:01:30.889842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 00:01:30.889852 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 00:01:30.889862 kernel: loop: module loaded
May 15 00:01:30.889872 kernel: fuse: init (API version 7.39)
May 15 00:01:30.889882 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 15 00:01:30.889892 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:01:30.889902 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 00:01:30.889912 systemd[1]: Stopped verity-setup.service.
May 15 00:01:30.889924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:01:30.889941 kernel: ACPI: bus type drm_connector registered
May 15 00:01:30.889951 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 00:01:30.889961 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 00:01:30.889971 systemd[1]: Mounted media.mount - External Media Directory.
May 15 00:01:30.889981 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 00:01:30.890009 systemd-journald[1139]: Collecting audit messages is disabled.
May 15 00:01:30.890033 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 00:01:30.890226 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 00:01:30.890238 systemd-journald[1139]: Journal started
May 15 00:01:30.890277 systemd-journald[1139]: Runtime Journal (/run/log/journal/f4f4611e712946c294fc739f9efdb7c1) is 8M, max 78.3M, 70.3M free.
May 15 00:01:30.474720 systemd[1]: Queued start job for default target multi-user.target.
May 15 00:01:30.487073 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 15 00:01:30.487560 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 00:01:30.893281 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:01:30.896190 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 00:01:30.897382 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:01:30.898704 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 00:01:30.898976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 00:01:30.900012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:01:30.900214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:01:30.901715 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:01:30.901906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:01:30.902782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:01:30.903020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:01:30.903912 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 00:01:30.904181 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 00:01:30.905051 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:01:30.905461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:01:30.906445 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:01:30.907677 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 00:01:30.908585 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 00:01:30.925016 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 00:01:30.933333 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 00:01:30.940535 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 00:01:30.941724 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 00:01:30.941809 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:01:30.944065 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 15 00:01:30.950110 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 00:01:30.953354 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 00:01:30.954005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:01:30.962203 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 00:01:30.965375 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 00:01:30.966537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:01:30.972381 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 00:01:30.972962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:01:30.980494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:01:30.995390 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 00:01:31.085120 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 00:01:31.094344 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 15 00:01:31.095724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:01:31.096568 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 00:01:31.098178 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 00:01:31.099427 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 00:01:31.102320 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 00:01:31.122511 kernel: loop0: detected capacity change from 0 to 147912
May 15 00:01:31.128960 systemd-journald[1139]: Time spent on flushing to /var/log/journal/f4f4611e712946c294fc739f9efdb7c1 is 33.804ms for 994 entries.
May 15 00:01:31.128960 systemd-journald[1139]: System Journal (/var/log/journal/f4f4611e712946c294fc739f9efdb7c1) is 8M, max 195.6M, 187.6M free.
May 15 00:01:31.372235 systemd-journald[1139]: Received client request to flush runtime journal.
May 15 00:01:31.303145 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 00:01:31.316482 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 15 00:01:31.331640 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 00:01:31.359578 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 15 00:01:31.376242 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 00:01:31.384967 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 00:01:31.390272 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 00:01:31.394658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:01:31.400233 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 00:01:31.413149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:01:31.421349 kernel: loop1: detected capacity change from 0 to 138176
May 15 00:01:31.429766 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 15 00:01:31.430415 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 15 00:01:31.436681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:01:31.465481 kernel: loop2: detected capacity change from 0 to 8
May 15 00:01:31.490174 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 00:01:31.495501 kernel: loop3: detected capacity change from 0 to 218376
May 15 00:01:31.575314 kernel: loop4: detected capacity change from 0 to 147912
May 15 00:01:31.835299 kernel: loop5: detected capacity change from 0 to 138176
May 15 00:01:31.862283 kernel: loop6: detected capacity change from 0 to 8
May 15 00:01:31.867512 kernel: loop7: detected capacity change from 0 to 218376
May 15 00:01:31.916176 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
May 15 00:01:31.921242 (sd-merge)[1201]: Merged extensions into '/usr'.
May 15 00:01:31.928897 systemd[1]: Reload requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 00:01:31.928914 systemd[1]: Reloading...
May 15 00:01:32.374676 zram_generator::config[1225]: No configuration found.
May 15 00:01:32.417281 ldconfig[1168]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 00:01:32.519834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:01:32.578364 systemd[1]: Reloading finished in 648 ms.
May 15 00:01:32.594218 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 00:01:32.595910 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 00:01:32.609417 systemd[1]: Starting ensure-sysext.service...
May 15 00:01:32.615495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:01:32.691668 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 00:01:32.702517 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:01:32.707721 systemd[1]: Reload requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
May 15 00:01:32.707734 systemd[1]: Reloading...
May 15 00:01:32.710123 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 00:01:32.710658 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 00:01:32.711819 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 00:01:32.712061 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
May 15 00:01:32.712128 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
May 15 00:01:32.723215 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:01:32.723228 systemd-tmpfiles[1273]: Skipping /boot
May 15 00:01:32.742549 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:01:32.742563 systemd-tmpfiles[1273]: Skipping /boot
May 15 00:01:32.756852 systemd-udevd[1275]: Using default interface naming scheme 'v255'.
May 15 00:01:32.819348 zram_generator::config[1310]: No configuration found.
May 15 00:01:33.003579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:01:33.005306 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 15 00:01:33.050276 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 15 00:01:33.050666 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 15 00:01:33.050849 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 00:01:33.063040 kernel: ACPI: button: Power Button [PWRF]
May 15 00:01:33.075421 systemd[1]: Reloading finished in 367 ms.
May 15 00:01:33.084949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:01:33.086041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:01:33.090370 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 00:01:33.102282 kernel: EDAC MC: Ver: 3.0.0
May 15 00:01:33.133238 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 00:01:33.134939 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:01:33.146605 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 00:01:33.159811 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 00:01:33.161432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:01:33.164304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:01:33.173583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:01:33.176552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:01:33.177738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:01:33.177890 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 00:01:33.184277 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1303)
May 15 00:01:33.214292 kernel: mousedev: PS/2 mouse device common for all mice
May 15 00:01:33.219693 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 00:01:33.230588 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:01:33.246758 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:01:33.252202 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 00:01:33.264509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:01:33.265099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:01:33.267835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:01:33.268051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:01:33.269146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:01:33.274044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:01:33.275871 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:01:33.277167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:01:33.332304 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:01:33.332543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:01:33.502978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:01:33.513572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:01:33.515878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:01:33.524498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:01:33.525176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:01:33.525290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 15 00:01:33.545771 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 00:01:33.546694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:01:33.553709 systemd[1]: Finished ensure-sysext.service.
May 15 00:01:33.554895 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 00:01:33.559672 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:01:33.559859 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:01:33.561000 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 00:01:33.569551 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 00:01:33.574404 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 00:01:33.579307 augenrules[1422]: No rules
May 15 00:01:33.583752 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:01:33.585886 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 00:01:33.587135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:01:33.588730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:01:33.590589 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:01:33.591403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:01:33.593534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:01:33.594162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:01:33.605125 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 15 00:01:33.612450 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 00:01:33.616426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 00:01:33.617003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:01:33.617062 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:01:33.628376 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 00:01:33.632145 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 00:01:33.632732 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 00:01:33.644363 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 00:01:33.675673 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:01:33.674671 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 00:01:33.726242 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 00:01:33.730082 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:01:33.735506 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 00:01:33.736296 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:01:33.746420 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 00:01:33.756014 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:01:33.783522 systemd-networkd[1386]: lo: Link UP
May 15 00:01:33.783534 systemd-networkd[1386]: lo: Gained carrier
May 15 00:01:33.785208 systemd-networkd[1386]: Enumeration completed
May 15 00:01:33.785429 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:01:33.787366 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:01:33.787370 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:01:33.790527 systemd-networkd[1386]: eth0: Link UP
May 15 00:01:33.790539 systemd-networkd[1386]: eth0: Gained carrier
May 15 00:01:33.790552 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:01:33.794465 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 15 00:01:33.799403 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 00:01:33.800287 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 00:01:33.800681 systemd-resolved[1392]: Positive Trust Anchors:
May 15 00:01:33.800939 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:01:33.801017 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:01:33.810898 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 00:01:33.811612 systemd[1]: Reached target time-set.target - System Time Set.
May 15 00:01:33.813580 systemd-resolved[1392]: Defaulting to hostname 'linux'.
May 15 00:01:33.816493 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:01:33.817212 systemd[1]: Reached target network.target - Network.
May 15 00:01:33.817785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:01:33.818422 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:01:33.819083 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 00:01:33.819689 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 00:01:33.820475 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 00:01:33.821118 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 00:01:33.821699 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 00:01:33.822251 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 00:01:33.822306 systemd[1]: Reached target paths.target - Path Units.
May 15 00:01:33.822789 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:01:33.824644 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 00:01:33.826741 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 00:01:33.829694 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 15 00:01:33.830439 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 15 00:01:33.831007 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 15 00:01:33.833884 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 00:01:33.834806 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 15 00:01:33.836115 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 15 00:01:33.836900 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 00:01:33.838129 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:01:33.838687 systemd[1]: Reached target basic.target - Basic System.
May 15 00:01:33.839245 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 00:01:33.839302 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 00:01:33.856366 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 00:01:33.859423 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 15 00:01:33.863555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 00:01:33.865384 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 00:01:33.869433 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 00:01:33.870382 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 00:01:33.882892 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 00:01:33.892146 jq[1463]: false
May 15 00:01:33.899376 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 00:01:33.907392 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 00:01:33.910298 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 00:01:33.913374 extend-filesystems[1464]: Found loop4
May 15 00:01:33.913374 extend-filesystems[1464]: Found loop5
May 15 00:01:33.913374 extend-filesystems[1464]: Found loop6
May 15 00:01:33.913374 extend-filesystems[1464]: Found loop7
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda1
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda2
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda3
May 15 00:01:33.913374 extend-filesystems[1464]: Found usr
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda4
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda6
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda7
May 15 00:01:33.913374 extend-filesystems[1464]: Found sda9
May 15 00:01:33.913374 extend-filesystems[1464]: Checking size of /dev/sda9
May 15 00:01:34.011933 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
May 15 00:01:33.918408 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 00:01:33.948976 dbus-daemon[1462]: [system] SELinux support is enabled
May 15 00:01:34.013268 extend-filesystems[1464]: Resized partition /dev/sda9
May 15 00:01:33.921637 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 00:01:34.014416 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024)
May 15 00:01:33.922081 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 00:01:33.969356 systemd[1]: Starting update-engine.service - Update Engine...
May 15 00:01:34.023086 update_engine[1476]: I20250515 00:01:34.004153 1476 main.cc:92] Flatcar Update Engine starting
May 15 00:01:34.005335 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 00:01:34.044578 update_engine[1476]: I20250515 00:01:34.025444 1476 update_check_scheduler.cc:74] Next update check in 3m42s
May 15 00:01:34.044600 coreos-metadata[1461]: May 15 00:01:34.028 INFO Putting http://169.254.169.254/v1/token: Attempt #1
May 15 00:01:34.009878 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 00:01:34.026816 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 00:01:34.027053 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 00:01:34.027439 systemd[1]: motdgen.service: Deactivated successfully.
May 15 00:01:34.027709 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 00:01:34.044656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 00:01:34.044877 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 00:01:34.056571 jq[1490]: true
May 15 00:01:34.072790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 00:01:34.072854 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 00:01:34.073691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 00:01:34.073716 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 00:01:34.078403 systemd[1]: Started update-engine.service - Update Engine.
May 15 00:01:34.078574 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 00:01:34.088878 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 00:01:34.257308 jq[1499]: true
May 15 00:01:34.349305 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1311)
May 15 00:01:34.433407 systemd-networkd[1386]: eth0: DHCPv4 address 172.237.148.154/24, gateway 172.237.148.1 acquired from 23.40.196.199
May 15 00:01:34.433719 dbus-daemon[1462]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1386 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 15 00:01:34.453030 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection.
May 15 00:01:34.454481 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 15 00:01:34.517621 tar[1493]: linux-amd64/LICENSE
May 15 00:01:34.517621 tar[1493]: linux-amd64/helm
May 15 00:01:34.527541 systemd-logind[1475]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 00:01:34.527584 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 00:01:34.535413 systemd-logind[1475]: New seat seat0.
May 15 00:01:34.551787 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 00:01:35.264919 systemd-resolved[1392]: Clock change detected. Flushing caches.
May 15 00:01:35.267264 systemd-timesyncd[1442]: Contacted time server 102.129.185.135:123 (0.flatcar.pool.ntp.org).
May 15 00:01:35.267572 systemd-timesyncd[1442]: Initial clock synchronization to Thu 2025-05-15 00:01:35.264069 UTC.
May 15 00:01:35.268848 bash[1523]: Updated "/home/core/.ssh/authorized_keys"
May 15 00:01:35.269608 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 00:01:35.435577 kernel: EXT4-fs (sda9): resized filesystem to 20360187
May 15 00:01:35.433448 systemd[1]: Starting sshkeys.service...
May 15 00:01:35.464963 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 15 00:01:35.464963 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 10
May 15 00:01:35.464963 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
May 15 00:01:35.470174 extend-filesystems[1464]: Resized filesystem in /dev/sda9
May 15 00:01:35.471501 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 00:01:35.471767 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 00:01:35.484370 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 15 00:01:35.492348 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 15 00:01:35.513250 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 15 00:01:35.514144 dbus-daemon[1462]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 15 00:01:35.515428 dbus-daemon[1462]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1508 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 15 00:01:35.535111 systemd[1]: Starting polkit.service - Authorization Manager...
May 15 00:01:35.554735 polkitd[1537]: Started polkitd version 121
May 15 00:01:35.555187 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 00:01:35.641162 polkitd[1537]: Loading rules from directory /etc/polkit-1/rules.d
May 15 00:01:35.641236 polkitd[1537]: Loading rules from directory /usr/share/polkit-1/rules.d
May 15 00:01:35.644800 polkitd[1537]: Finished loading, compiling and executing 2 rules
May 15 00:01:35.646674 dbus-daemon[1462]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 15 00:01:35.646904 systemd[1]: Started polkit.service - Authorization Manager.
May 15 00:01:35.649493 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 00:01:35.650277 polkitd[1537]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 15 00:01:35.659217 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 00:01:35.815538 systemd-networkd[1386]: eth0: Gained IPv6LL
May 15 00:01:35.819241 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 00:01:35.823019 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 00:01:35.824504 systemd[1]: Reached target network-online.target - Network is Online.
May 15 00:01:35.833762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:01:35.842291 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 00:01:35.847294 systemd-hostnamed[1508]: Hostname set to <172-237-148-154> (transient)
May 15 00:01:35.847651 systemd-resolved[1392]: System hostname changed to '172-237-148-154'.
May 15 00:01:35.855558 coreos-metadata[1461]: May 15 00:01:35.854 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 15 00:01:35.856376 coreos-metadata[1536]: May 15 00:01:35.856 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 15 00:01:35.862762 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:01:35.863604 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:01:35.873276 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:01:35.937478 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:01:35.983024 coreos-metadata[1536]: May 15 00:01:35.982 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 15 00:01:35.985337 containerd[1498]: time="2025-05-15T00:01:35.985233640Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 00:01:35.987391 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:01:36.000250 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:01:36.012646 coreos-metadata[1461]: May 15 00:01:36.012 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 15 00:01:36.017326 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 00:01:36.018093 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:01:36.122903 containerd[1498]: time="2025-05-15T00:01:36.122294750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:01:36.129438 containerd[1498]: time="2025-05-15T00:01:36.129365490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:01:36.129438 containerd[1498]: time="2025-05-15T00:01:36.129434380Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:01:36.129518 containerd[1498]: time="2025-05-15T00:01:36.129461010Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:01:36.129686 containerd[1498]: time="2025-05-15T00:01:36.129659490Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:01:36.129715 containerd[1498]: time="2025-05-15T00:01:36.129700510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 00:01:36.129808 containerd[1498]: time="2025-05-15T00:01:36.129783020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:01:36.129808 containerd[1498]: time="2025-05-15T00:01:36.129805320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:01:36.130749 containerd[1498]: time="2025-05-15T00:01:36.130065260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:01:36.130749 containerd[1498]: time="2025-05-15T00:01:36.130089840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 15 00:01:36.130749 containerd[1498]: time="2025-05-15T00:01:36.130103870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:01:36.130749 containerd[1498]: time="2025-05-15T00:01:36.130113090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:01:36.130749 containerd[1498]: time="2025-05-15T00:01:36.130224840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:01:36.130749 containerd[1498]: time="2025-05-15T00:01:36.130591860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:01:36.131071 containerd[1498]: time="2025-05-15T00:01:36.130768340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:01:36.131071 containerd[1498]: time="2025-05-15T00:01:36.130781450Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:01:36.131071 containerd[1498]: time="2025-05-15T00:01:36.130882620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:01:36.131071 containerd[1498]: time="2025-05-15T00:01:36.130946960Z" level=info msg="metadata content store policy set" policy=shared May 15 00:01:36.131377 coreos-metadata[1536]: May 15 00:01:36.131 INFO Fetch successful May 15 00:01:36.137886 containerd[1498]: time="2025-05-15T00:01:36.137152790Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 May 15 00:01:36.137886 containerd[1498]: time="2025-05-15T00:01:36.137222710Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:01:36.137886 containerd[1498]: time="2025-05-15T00:01:36.137253880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 00:01:36.137886 containerd[1498]: time="2025-05-15T00:01:36.137275670Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:01:36.137886 containerd[1498]: time="2025-05-15T00:01:36.137290320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:01:36.137886 containerd[1498]: time="2025-05-15T00:01:36.137431000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144283430Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144449780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144466220Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144479970Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144493300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144506100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144518330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144531230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144562270Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144578320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144600020Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144611280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144645530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145000 containerd[1498]: time="2025-05-15T00:01:36.144659500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144680280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..."
type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144696670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144712930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144724650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144735250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144750020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144761240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144774250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144785390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144799040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144809380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144825000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..."
type=io.containerd.transfer.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144876750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144889810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145352 containerd[1498]: time="2025-05-15T00:01:36.144899700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.144967780Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.144982940Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.144992290Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.145004140Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.145013150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.145059260Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.145077090Z" level=info msg="NRI interface is disabled by configuration."
May 15 00:01:36.145700 containerd[1498]: time="2025-05-15T00:01:36.145095040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 00:01:36.145868 containerd[1498]: time="2025-05-15T00:01:36.145583490Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:}
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 00:01:36.145868 containerd[1498]: time="2025-05-15T00:01:36.145646440Z" level=info msg="Connect containerd service"
May 15 00:01:36.145868 containerd[1498]: time="2025-05-15T00:01:36.145700350Z" level=info msg="using legacy CRI server"
May 15 00:01:36.145868 containerd[1498]: time="2025-05-15T00:01:36.145711750Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 15 00:01:36.145868 containerd[1498]: time="2025-05-15T00:01:36.145876170Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 00:01:36.159648 containerd[1498]: time="2025-05-15T00:01:36.159543240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:01:36.159955 containerd[1498]: time="2025-05-15T00:01:36.159837090Z" level=info msg="Start subscribing containerd event"
May 15 00:01:36.160078 containerd[1498]: time="2025-05-15T00:01:36.160058790Z" level=info msg="Start recovering state"
May 15 00:01:36.160285 containerd[1498]:
time="2025-05-15T00:01:36.160265250Z" level=info msg="Start event monitor"
May 15 00:01:36.160338 containerd[1498]: time="2025-05-15T00:01:36.160320880Z" level=info msg="Start snapshots syncer"
May 15 00:01:36.160365 containerd[1498]: time="2025-05-15T00:01:36.160349280Z" level=info msg="Start cni network conf syncer for default"
May 15 00:01:36.160389 containerd[1498]: time="2025-05-15T00:01:36.160378630Z" level=info msg="Start streaming server"
May 15 00:01:36.161459 containerd[1498]: time="2025-05-15T00:01:36.161429120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 00:01:36.161544 containerd[1498]: time="2025-05-15T00:01:36.161522720Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 00:01:36.161702 systemd[1]: Started containerd.service - containerd container runtime.
May 15 00:01:36.163390 containerd[1498]: time="2025-05-15T00:01:36.163202420Z" level=info msg="containerd successfully booted in 0.185453s"
May 15 00:01:36.186598 update-ssh-keys[1583]: Updated "/home/core/.ssh/authorized_keys"
May 15 00:01:36.189310 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 15 00:01:36.194744 systemd[1]: Finished sshkeys.service.
May 15 00:01:36.194990 coreos-metadata[1461]: May 15 00:01:36.194 INFO Fetch successful
May 15 00:01:36.194990 coreos-metadata[1461]: May 15 00:01:36.194 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
May 15 00:01:36.510163 coreos-metadata[1461]: May 15 00:01:36.506 INFO Fetch successful
May 15 00:01:36.690863 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 15 00:01:36.691847 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 00:01:36.969599 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 00:01:36.982418 systemd[1]: Started sshd@0-172.237.148.154:22-139.178.68.195:57140.service - OpenSSH per-connection server daemon (139.178.68.195:57140).
May 15 00:01:37.286635 tar[1493]: linux-amd64/README.md
May 15 00:01:37.306386 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 00:01:37.431195 sshd[1607]: Accepted publickey for core from 139.178.68.195 port 57140 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:01:37.433664 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:01:37.532885 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 00:01:37.546844 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 00:01:37.557455 systemd-logind[1475]: New session 1 of user core.
May 15 00:01:37.581955 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 00:01:37.591619 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 00:01:37.607720 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 00:01:37.612863 systemd-logind[1475]: New session c1 of user core.
May 15 00:01:37.782781 systemd[1614]: Queued start job for default target default.target.
May 15 00:01:37.954897 systemd[1614]: Created slice app.slice - User Application Slice.
May 15 00:01:37.954943 systemd[1614]: Reached target paths.target - Paths.
May 15 00:01:37.955461 systemd[1614]: Reached target timers.target - Timers.
May 15 00:01:37.958523 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 00:01:37.989940 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 00:01:37.990087 systemd[1614]: Reached target sockets.target - Sockets.
May 15 00:01:37.990186 systemd[1614]: Reached target basic.target - Basic System.
May 15 00:01:37.990267 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 00:01:37.990848 systemd[1614]: Reached target default.target - Main User Target.
May 15 00:01:37.990899 systemd[1614]: Startup finished in 364ms.
May 15 00:01:38.002154 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 00:01:38.345310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:01:38.346573 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 00:01:38.348108 systemd[1]: Startup finished in 961ms (kernel) + 9.156s (initrd) + 7.889s (userspace) = 18.007s.
May 15 00:01:38.381256 systemd[1]: Started sshd@1-172.237.148.154:22-139.178.68.195:57146.service - OpenSSH per-connection server daemon (139.178.68.195:57146).
May 15 00:01:38.387688 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:01:38.876369 sshd[1631]: Accepted publickey for core from 139.178.68.195 port 57146 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:01:38.878202 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:01:38.890399 systemd-logind[1475]: New session 2 of user core.
May 15 00:01:38.897166 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 00:01:39.137169 sshd[1641]: Connection closed by 139.178.68.195 port 57146
May 15 00:01:39.138821 sshd-session[1631]: pam_unix(sshd:session): session closed for user core
May 15 00:01:39.144602 systemd[1]: sshd@1-172.237.148.154:22-139.178.68.195:57146.service: Deactivated successfully.
May 15 00:01:39.147788 systemd[1]: session-2.scope: Deactivated successfully.
May 15 00:01:39.148593 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit.
May 15 00:01:39.150627 systemd-logind[1475]: Removed session 2.
May 15 00:01:39.209819 systemd[1]: Started sshd@2-172.237.148.154:22-139.178.68.195:57150.service - OpenSSH per-connection server daemon (139.178.68.195:57150).
May 15 00:01:39.490229 kubelet[1629]: E0515 00:01:39.489994 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:01:39.496557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:01:39.496780 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:01:39.497405 systemd[1]: kubelet.service: Consumed 2.628s CPU time, 253M memory peak.
May 15 00:01:39.538878 sshd[1647]: Accepted publickey for core from 139.178.68.195 port 57150 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:01:39.541339 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:01:39.547659 systemd-logind[1475]: New session 3 of user core.
May 15 00:01:39.558159 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 00:01:39.779340 sshd[1652]: Connection closed by 139.178.68.195 port 57150
May 15 00:01:39.780587 sshd-session[1647]: pam_unix(sshd:session): session closed for user core
May 15 00:01:39.784855 systemd[1]: sshd@2-172.237.148.154:22-139.178.68.195:57150.service: Deactivated successfully.
May 15 00:01:39.787476 systemd[1]: session-3.scope: Deactivated successfully.
May 15 00:01:39.788200 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit.
May 15 00:01:39.789070 systemd-logind[1475]: Removed session 3.
May 15 00:01:39.854688 systemd[1]: Started sshd@3-172.237.148.154:22-139.178.68.195:57164.service - OpenSSH per-connection server daemon (139.178.68.195:57164).
May 15 00:01:40.179567 sshd[1658]: Accepted publickey for core from 139.178.68.195 port 57164 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:01:40.181151 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:01:40.185982 systemd-logind[1475]: New session 4 of user core.
May 15 00:01:40.194176 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 00:01:40.427194 sshd[1660]: Connection closed by 139.178.68.195 port 57164
May 15 00:01:40.427799 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
May 15 00:01:40.432020 systemd[1]: sshd@3-172.237.148.154:22-139.178.68.195:57164.service: Deactivated successfully.
May 15 00:01:40.434280 systemd[1]: session-4.scope: Deactivated successfully.
May 15 00:01:40.436090 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit.
May 15 00:01:40.437685 systemd-logind[1475]: Removed session 4.
May 15 00:01:40.495272 systemd[1]: Started sshd@4-172.237.148.154:22-139.178.68.195:57168.service - OpenSSH per-connection server daemon (139.178.68.195:57168).
May 15 00:01:40.813260 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 57168 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:01:40.815098 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:01:40.820268 systemd-logind[1475]: New session 5 of user core.
May 15 00:01:40.829406 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 00:01:41.014685 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 00:01:41.015002 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:01:41.030972 sudo[1669]: pam_unix(sudo:session): session closed for user root
May 15 00:01:41.080508 sshd[1668]: Connection closed by 139.178.68.195 port 57168
May 15 00:01:41.081444 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
May 15 00:01:41.084831 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit.
May 15 00:01:41.085628 systemd[1]: sshd@4-172.237.148.154:22-139.178.68.195:57168.service: Deactivated successfully.
May 15 00:01:41.087502 systemd[1]: session-5.scope: Deactivated successfully.
May 15 00:01:41.088389 systemd-logind[1475]: Removed session 5.
May 15 00:01:41.148228 systemd[1]: Started sshd@5-172.237.148.154:22-139.178.68.195:57178.service - OpenSSH per-connection server daemon (139.178.68.195:57178).
May 15 00:01:41.484221 sshd[1675]: Accepted publickey for core from 139.178.68.195 port 57178 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:01:41.485720 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:01:41.490592 systemd-logind[1475]: New session 6 of user core.
May 15 00:01:41.495144 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 00:01:41.689592 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 00:01:41.689938 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:01:41.693788 sudo[1679]: pam_unix(sudo:session): session closed for user root
May 15 00:01:41.700806 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 00:01:41.701294 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:01:41.719273 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 00:01:41.751114 augenrules[1701]: No rules
May 15 00:01:41.752699 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:01:41.752971 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 00:01:41.754083 sudo[1678]: pam_unix(sudo:session): session closed for user root
May 15 00:01:41.809421 sshd[1677]: Connection closed by 139.178.68.195 port 57178
May 15 00:01:41.809918 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
May 15 00:01:41.812755 systemd[1]: sshd@5-172.237.148.154:22-139.178.68.195:57178.service: Deactivated successfully.
May 15 00:01:41.814620 systemd[1]: session-6.scope: Deactivated successfully.
May 15 00:01:41.815896 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit.
May 15 00:01:41.816901 systemd-logind[1475]: Removed session 6.
May 15 00:01:41.879216 systemd[1]: Started sshd@6-172.237.148.154:22-139.178.68.195:57182.service - OpenSSH per-connection server daemon (139.178.68.195:57182).
May 15 00:01:42.213288 sshd[1710]: Accepted publickey for core from 139.178.68.195 port 57182 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:01:42.214600 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:01:42.219708 systemd-logind[1475]: New session 7 of user core.
May 15 00:01:42.226141 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 00:01:42.414499 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 00:01:42.414810 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:01:43.366322 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 00:01:43.366576 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 00:01:44.213480 dockerd[1729]: time="2025-05-15T00:01:44.213342650Z" level=info msg="Starting up"
May 15 00:01:44.429712 dockerd[1729]: time="2025-05-15T00:01:44.429610770Z" level=info msg="Loading containers: start."
May 15 00:01:44.614321 kernel: Initializing XFRM netlink socket
May 15 00:01:44.740566 systemd-networkd[1386]: docker0: Link UP
May 15 00:01:44.774049 dockerd[1729]: time="2025-05-15T00:01:44.773868100Z" level=info msg="Loading containers: done."
May 15 00:01:44.803856 dockerd[1729]: time="2025-05-15T00:01:44.803767550Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 00:01:44.804128 dockerd[1729]: time="2025-05-15T00:01:44.804045560Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 15 00:01:44.804322 dockerd[1729]: time="2025-05-15T00:01:44.804293520Z" level=info msg="Daemon has completed initialization"
May 15 00:01:44.807156 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3955624969-merged.mount: Deactivated successfully.
May 15 00:01:44.843366 dockerd[1729]: time="2025-05-15T00:01:44.842934610Z" level=info msg="API listen on /run/docker.sock"
May 15 00:01:44.843665 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 00:01:45.833951 containerd[1498]: time="2025-05-15T00:01:45.833786040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 15 00:01:46.470360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215474516.mount: Deactivated successfully.
May 15 00:01:48.050006 containerd[1498]: time="2025-05-15T00:01:48.049882170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:48.051203 containerd[1498]: time="2025-05-15T00:01:48.050987620Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
May 15 00:01:48.053051 containerd[1498]: time="2025-05-15T00:01:48.051752150Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:48.054206 containerd[1498]: time="2025-05-15T00:01:48.054179930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:48.055096 containerd[1498]: time="2025-05-15T00:01:48.055068620Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.22110219s"
May 15 00:01:48.055154 containerd[1498]: time="2025-05-15T00:01:48.055116990Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 15 00:01:48.055962 containerd[1498]: time="2025-05-15T00:01:48.055941270Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 15 00:01:49.748159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 00:01:49.765110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:01:50.172180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:01:50.176795 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:01:50.352877 kubelet[1984]: E0515 00:01:50.352790 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:01:50.363482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:01:50.364003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:01:50.365639 systemd[1]: kubelet.service: Consumed 540ms CPU time, 106.6M memory peak.
May 15 00:01:50.410998 containerd[1498]: time="2025-05-15T00:01:50.410524430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:50.412270 containerd[1498]: time="2025-05-15T00:01:50.412012500Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
May 15 00:01:50.413776 containerd[1498]: time="2025-05-15T00:01:50.413720520Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:50.419095 containerd[1498]: time="2025-05-15T00:01:50.419012000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:50.421746 containerd[1498]: time="2025-05-15T00:01:50.420374400Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.36433816s"
May 15 00:01:50.421746 containerd[1498]: time="2025-05-15T00:01:50.420522960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 15 00:01:50.426776 containerd[1498]: time="2025-05-15T00:01:50.426635430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 15 00:01:53.133084 containerd[1498]: time="2025-05-15T00:01:53.124269180Z" level=info msg="ImageCreate event
name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:53.133084 containerd[1498]: time="2025-05-15T00:01:53.125568700Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
May 15 00:01:53.133084 containerd[1498]: time="2025-05-15T00:01:53.126478220Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:53.133084 containerd[1498]: time="2025-05-15T00:01:53.129312820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:53.133084 containerd[1498]: time="2025-05-15T00:01:53.130501740Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.70381387s"
May 15 00:01:53.133084 containerd[1498]: time="2025-05-15T00:01:53.130567140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 15 00:01:53.136686 containerd[1498]: time="2025-05-15T00:01:53.136622690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 15 00:01:54.997517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823134430.mount: Deactivated successfully.
May 15 00:01:55.730852 containerd[1498]: time="2025-05-15T00:01:55.730693420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:55.732747 containerd[1498]: time="2025-05-15T00:01:55.732347080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
May 15 00:01:55.733401 containerd[1498]: time="2025-05-15T00:01:55.733122180Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:55.735682 containerd[1498]: time="2025-05-15T00:01:55.735644420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:55.736230 containerd[1498]: time="2025-05-15T00:01:55.736198130Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.59948978s"
May 15 00:01:55.736331 containerd[1498]: time="2025-05-15T00:01:55.736308760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 15 00:01:55.738855 containerd[1498]: time="2025-05-15T00:01:55.738824510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 15 00:01:56.290756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710544424.mount: Deactivated successfully.
May 15 00:01:57.266791 containerd[1498]: time="2025-05-15T00:01:57.266738900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:57.269085 containerd[1498]: time="2025-05-15T00:01:57.267998720Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 15 00:01:57.269085 containerd[1498]: time="2025-05-15T00:01:57.268191960Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:57.272663 containerd[1498]: time="2025-05-15T00:01:57.271489790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:57.272663 containerd[1498]: time="2025-05-15T00:01:57.272404880Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.53342134s"
May 15 00:01:57.272663 containerd[1498]: time="2025-05-15T00:01:57.272474400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 15 00:01:57.275606 containerd[1498]: time="2025-05-15T00:01:57.275569220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 00:01:57.837515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041175517.mount: Deactivated successfully.
May 15 00:01:57.842157 containerd[1498]: time="2025-05-15T00:01:57.842093750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:57.843397 containerd[1498]: time="2025-05-15T00:01:57.843321890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 15 00:01:57.843983 containerd[1498]: time="2025-05-15T00:01:57.843920570Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:57.847015 containerd[1498]: time="2025-05-15T00:01:57.846161900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:01:57.847015 containerd[1498]: time="2025-05-15T00:01:57.846784950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 571.11306ms"
May 15 00:01:57.847015 containerd[1498]: time="2025-05-15T00:01:57.846880400Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 15 00:01:57.847956 containerd[1498]: time="2025-05-15T00:01:57.847911980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 15 00:01:58.461724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607827415.mount: Deactivated successfully.
May 15 00:02:00.615257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 00:02:00.623210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:02:00.639620 containerd[1498]: time="2025-05-15T00:02:00.639581590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:02:00.641777 containerd[1498]: time="2025-05-15T00:02:00.641517640Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 15 00:02:00.643743 containerd[1498]: time="2025-05-15T00:02:00.643719900Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:02:00.648956 containerd[1498]: time="2025-05-15T00:02:00.647186890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:02:00.649738 containerd[1498]: time="2025-05-15T00:02:00.648921340Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.80098448s"
May 15 00:02:00.649738 containerd[1498]: time="2025-05-15T00:02:00.649484090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 15 00:02:01.021371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:02:01.021549 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:02:01.109479 kubelet[2134]: E0515 00:02:01.109384 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:02:01.112986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:02:01.113249 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:02:01.114371 systemd[1]: kubelet.service: Consumed 428ms CPU time, 106M memory peak.
May 15 00:02:02.572584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:02:02.573166 systemd[1]: kubelet.service: Consumed 428ms CPU time, 106M memory peak.
May 15 00:02:02.585214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:02:02.615317 systemd[1]: Reload requested from client PID 2159 ('systemctl') (unit session-7.scope)...
May 15 00:02:02.615476 systemd[1]: Reloading...
May 15 00:02:02.787122 zram_generator::config[2205]: No configuration found.
May 15 00:02:02.906619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:02:02.995397 systemd[1]: Reloading finished in 379 ms.
May 15 00:02:03.038750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:02:03.043843 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:02:03.045844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:02:03.046749 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:02:03.047069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:02:03.047112 systemd[1]: kubelet.service: Consumed 283ms CPU time, 91.8M memory peak.
May 15 00:02:03.055277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:02:03.213077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:02:03.216714 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:02:03.277466 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:02:03.279615 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 00:02:03.279615 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:02:03.279615 kubelet[2261]: I0515 00:02:03.278096 2261 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:02:03.599306 kubelet[2261]: I0515 00:02:03.598095 2261 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 00:02:03.599306 kubelet[2261]: I0515 00:02:03.598124 2261 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:02:03.599306 kubelet[2261]: I0515 00:02:03.598350 2261 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 00:02:03.627068 kubelet[2261]: E0515 00:02:03.627019 2261 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.148.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError"
May 15 00:02:03.627713 kubelet[2261]: I0515 00:02:03.627692 2261 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:02:03.638661 kubelet[2261]: E0515 00:02:03.638625 2261 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 00:02:03.638661 kubelet[2261]: I0515 00:02:03.638654 2261 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 00:02:03.642269 kubelet[2261]: I0515 00:02:03.642246 2261 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:02:03.643765 kubelet[2261]: I0515 00:02:03.643723 2261 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:02:03.643955 kubelet[2261]: I0515 00:02:03.643757 2261 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-148-154","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:02:03.644185 kubelet[2261]: I0515 00:02:03.643976 2261 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:02:03.644185 kubelet[2261]: I0515 00:02:03.643985 2261 container_manager_linux.go:304] "Creating device plugin manager"
May 15 00:02:03.644185 kubelet[2261]: I0515 00:02:03.644174 2261 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:02:03.647997 kubelet[2261]: I0515 00:02:03.647980 2261 kubelet.go:446] "Attempting to sync node with API server"
May 15 00:02:03.648208 kubelet[2261]: I0515 00:02:03.647998 2261 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:02:03.648208 kubelet[2261]: I0515 00:02:03.648047 2261 kubelet.go:352] "Adding apiserver pod source"
May 15 00:02:03.648208 kubelet[2261]: I0515 00:02:03.648060 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:02:03.651888 kubelet[2261]: W0515 00:02:03.651838 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.148.154:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-148-154&limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused
May 15 00:02:03.652058 kubelet[2261]: E0515 00:02:03.651983 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.148.154:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-148-154&limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError"
May 15 00:02:03.652690 kubelet[2261]: I0515 00:02:03.652156 2261 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 00:02:03.652690 kubelet[2261]: I0515 00:02:03.652514 2261 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:02:03.653136 kubelet[2261]: W0515 00:02:03.653122 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 00:02:03.656267 kubelet[2261]: I0515 00:02:03.656016 2261 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 00:02:03.656267 kubelet[2261]: I0515 00:02:03.656084 2261 server.go:1287] "Started kubelet"
May 15 00:02:03.661086 kubelet[2261]: I0515 00:02:03.660921 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:02:03.661724 kubelet[2261]: W0515 00:02:03.661680 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.148.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused
May 15 00:02:03.661772 kubelet[2261]: E0515 00:02:03.661727 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.148.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError"
May 15 00:02:03.662977 kubelet[2261]: E0515 00:02:03.661767 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.148.154:6443/api/v1/namespaces/default/events\": dial tcp 172.237.148.154:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-148-154.183f8a5f9a2ac17e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-148-154,UID:172-237-148-154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-148-154,},FirstTimestamp:2025-05-15 00:02:03.65606131 +0000 UTC m=+0.430260371,LastTimestamp:2025-05-15 00:02:03.65606131 +0000 UTC m=+0.430260371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-148-154,}"
May 15 00:02:03.665850 kubelet[2261]: I0515 00:02:03.665808 2261 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:02:03.667083 kubelet[2261]: I0515 00:02:03.666741 2261 server.go:490] "Adding debug handlers to kubelet server"
May 15 00:02:03.667509 kubelet[2261]: I0515 00:02:03.667442 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:02:03.667690 kubelet[2261]: I0515 00:02:03.667669 2261 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:02:03.667849 kubelet[2261]: I0515 00:02:03.667836 2261 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 00:02:03.668213 kubelet[2261]: E0515 00:02:03.668196 2261 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-148-154\" not found"
May 15 00:02:03.674356 kubelet[2261]: E0515 00:02:03.669083 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.148.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-148-154?timeout=10s\": dial tcp 172.237.148.154:6443: connect: connection refused" interval="200ms"
May 15 00:02:03.674664 kubelet[2261]: I0515 00:02:03.674607 2261 factory.go:221] Registration of the systemd container factory successfully
May 15 00:02:03.674823 kubelet[2261]: I0515 00:02:03.674792 2261 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:02:03.676842 kubelet[2261]: I0515 00:02:03.667846 2261 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 00:02:03.677135 kubelet[2261]: I0515 00:02:03.677120 2261 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:02:03.677237 kubelet[2261]: I0515 00:02:03.677227 2261 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:02:03.677781 kubelet[2261]: W0515 00:02:03.677753 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.148.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused
May 15 00:02:03.677883 kubelet[2261]: E0515 00:02:03.677864 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.148.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError"
May 15 00:02:03.678305 kubelet[2261]: I0515 00:02:03.678288 2261 factory.go:221] Registration of the containerd container factory successfully
May 15 00:02:03.697096 kubelet[2261]: I0515 00:02:03.697040 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:02:03.698533 kubelet[2261]: I0515 00:02:03.698507 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:02:03.698786 kubelet[2261]: I0515 00:02:03.698761 2261 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 00:02:03.698831 kubelet[2261]: I0515 00:02:03.698812 2261 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 00:02:03.698831 kubelet[2261]: I0515 00:02:03.698832 2261 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 00:02:03.698937 kubelet[2261]: E0515 00:02:03.698903 2261 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:02:03.705233 kubelet[2261]: W0515 00:02:03.705203 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.148.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused
May 15 00:02:03.705332 kubelet[2261]: E0515 00:02:03.705316 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.148.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError"
May 15 00:02:03.706576 kubelet[2261]: I0515 00:02:03.706547 2261 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 00:02:03.706576 kubelet[2261]: I0515 00:02:03.706568 2261 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 00:02:03.706660 kubelet[2261]: I0515 00:02:03.706590 2261 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:02:03.708353 kubelet[2261]: I0515 00:02:03.708323 2261 policy_none.go:49] "None policy: Start"
May 15 00:02:03.708415 kubelet[2261]: I0515 00:02:03.708366 2261 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 00:02:03.708415 kubelet[2261]: I0515 00:02:03.708391 2261 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:02:03.717582 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 00:02:03.730748 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 00:02:03.734506 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 00:02:03.742379 kubelet[2261]: I0515 00:02:03.742321 2261 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:02:03.743443 kubelet[2261]: I0515 00:02:03.743068 2261 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:02:03.743443 kubelet[2261]: I0515 00:02:03.743099 2261 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:02:03.743443 kubelet[2261]: I0515 00:02:03.743347 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:02:03.746133 kubelet[2261]: E0515 00:02:03.746115 2261 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 00:02:03.746576 kubelet[2261]: E0515 00:02:03.746396 2261 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-148-154\" not found"
May 15 00:02:03.811536 systemd[1]: Created slice kubepods-burstable-pod6f4c17b8b19f6a9da6961b455a969c9e.slice - libcontainer container kubepods-burstable-pod6f4c17b8b19f6a9da6961b455a969c9e.slice.
May 15 00:02:03.823859 kubelet[2261]: E0515 00:02:03.823810 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154"
May 15 00:02:03.826319 systemd[1]: Created slice kubepods-burstable-pod100e4532d16598bbbc9e83dd5a8638c4.slice - libcontainer container kubepods-burstable-pod100e4532d16598bbbc9e83dd5a8638c4.slice.
May 15 00:02:03.834286 kubelet[2261]: E0515 00:02:03.834257 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154"
May 15 00:02:03.837628 systemd[1]: Created slice kubepods-burstable-pod4e38d60ec3a6bbba332dc29779556aee.slice - libcontainer container kubepods-burstable-pod4e38d60ec3a6bbba332dc29779556aee.slice.
May 15 00:02:03.839815 kubelet[2261]: E0515 00:02:03.839789 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154"
May 15 00:02:03.845142 kubelet[2261]: I0515 00:02:03.845107 2261 kubelet_node_status.go:76] "Attempting to register node" node="172-237-148-154"
May 15 00:02:03.845737 kubelet[2261]: E0515 00:02:03.845704 2261 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.237.148.154:6443/api/v1/nodes\": dial tcp 172.237.148.154:6443: connect: connection refused" node="172-237-148-154"
May 15 00:02:03.870301 kubelet[2261]: E0515 00:02:03.870213 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.148.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-148-154?timeout=10s\": dial tcp 172.237.148.154:6443: connect: connection refused" interval="400ms"
May 15 00:02:03.878910 kubelet[2261]: I0515 00:02:03.878858 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-ca-certs\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154"
May 15 00:02:03.878910 kubelet[2261]: I0515 00:02:03.878902 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-kubeconfig\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154"
May 15 00:02:03.879043 kubelet[2261]: I0515 00:02:03.878935 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154"
May 15 00:02:03.879043 kubelet[2261]: I0515 00:02:03.878956 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f4c17b8b19f6a9da6961b455a969c9e-k8s-certs\") pod \"kube-apiserver-172-237-148-154\" (UID: \"6f4c17b8b19f6a9da6961b455a969c9e\") " pod="kube-system/kube-apiserver-172-237-148-154"
May 15 00:02:03.879043 kubelet[2261]: I0515 00:02:03.878971 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f4c17b8b19f6a9da6961b455a969c9e-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-148-154\" (UID: \"6f4c17b8b19f6a9da6961b455a969c9e\") " pod="kube-system/kube-apiserver-172-237-148-154"
May 15 00:02:03.879043 kubelet[2261]: I0515 00:02:03.878986 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-k8s-certs\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154"
May 15 00:02:03.879043 kubelet[2261]: I0515 00:02:03.879014 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e38d60ec3a6bbba332dc29779556aee-kubeconfig\") pod \"kube-scheduler-172-237-148-154\" (UID: \"4e38d60ec3a6bbba332dc29779556aee\") " pod="kube-system/kube-scheduler-172-237-148-154"
May 15 00:02:03.879172 kubelet[2261]: I0515 00:02:03.879052 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f4c17b8b19f6a9da6961b455a969c9e-ca-certs\") pod \"kube-apiserver-172-237-148-154\" (UID: \"6f4c17b8b19f6a9da6961b455a969c9e\") " pod="kube-system/kube-apiserver-172-237-148-154"
May 15 00:02:03.879172 kubelet[2261]: I0515 00:02:03.879069 2261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-flexvolume-dir\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154"
May 15 00:02:04.048623 kubelet[2261]: I0515 00:02:04.048569 2261 kubelet_node_status.go:76] "Attempting to register node" node="172-237-148-154"
May 15 00:02:04.049068 kubelet[2261]: E0515 00:02:04.048990 2261 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.237.148.154:6443/api/v1/nodes\": dial tcp 172.237.148.154:6443: connect: connection refused" node="172-237-148-154"
May 15 00:02:04.125215 kubelet[2261]: E0515 00:02:04.125081 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:02:04.126730 containerd[1498]: time="2025-05-15T00:02:04.126239690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-148-154,Uid:6f4c17b8b19f6a9da6961b455a969c9e,Namespace:kube-system,Attempt:0,}"
May 15 00:02:04.135126 kubelet[2261]: E0515 00:02:04.135059 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:02:04.136019 containerd[1498]: time="2025-05-15T00:02:04.135960240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-148-154,Uid:100e4532d16598bbbc9e83dd5a8638c4,Namespace:kube-system,Attempt:0,}"
May 15 00:02:04.140961 kubelet[2261]: E0515 00:02:04.140898 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:02:04.141667 containerd[1498]: time="2025-05-15T00:02:04.141627080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-148-154,Uid:4e38d60ec3a6bbba332dc29779556aee,Namespace:kube-system,Attempt:0,}"
May 15 00:02:04.270923 kubelet[2261]: E0515 00:02:04.270847 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.148.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-148-154?timeout=10s\": dial tcp 172.237.148.154:6443: connect: connection refused" interval="800ms"
May 15 00:02:04.450718 kubelet[2261]: I0515 00:02:04.450622 2261 kubelet_node_status.go:76] "Attempting to register node" node="172-237-148-154"
May 15 00:02:04.451219 kubelet[2261]: E0515 00:02:04.451188 2261 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.237.148.154:6443/api/v1/nodes\": dial tcp 172.237.148.154:6443: connect: connection refused" node="172-237-148-154"
May 15 00:02:04.578535 kubelet[2261]: W0515 00:02:04.578482 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.148.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused
May 15 00:02:04.578535 kubelet[2261]: E0515 00:02:04.578537 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.148.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError"
May 15 00:02:04.630201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062383126.mount: Deactivated successfully.
May 15 00:02:04.635101 containerd[1498]: time="2025-05-15T00:02:04.635064750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:02:04.637660 containerd[1498]: time="2025-05-15T00:02:04.637380540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 15 00:02:04.638149 containerd[1498]: time="2025-05-15T00:02:04.638120300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:02:04.639566 containerd[1498]: time="2025-05-15T00:02:04.639526630Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:02:04.643558 containerd[1498]: time="2025-05-15T00:02:04.641899940Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:02:04.643558 containerd[1498]: time="2025-05-15T00:02:04.642958810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:02:04.643558 containerd[1498]: time="2025-05-15T00:02:04.643016840Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:02:04.645352 containerd[1498]: time="2025-05-15T00:02:04.645301530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.38656ms"
May 15 00:02:04.646091 containerd[1498]: time="2025-05-15T00:02:04.646066120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:02:04.646744 containerd[1498]: time="2025-05-15T00:02:04.646722890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.60162ms"
May 15 00:02:04.648546 containerd[1498]: time="2025-05-15T00:02:04.648515820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.79536ms"
May 15 00:02:05.005193
kubelet[2261]: W0515 00:02:04.997487 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.148.154:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-148-154&limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused May 15 00:02:05.005193 kubelet[2261]: E0515 00:02:04.997554 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.148.154:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-148-154&limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError" May 15 00:02:05.005325 containerd[1498]: time="2025-05-15T00:02:05.000728020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:05.005325 containerd[1498]: time="2025-05-15T00:02:05.000888910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:05.005325 containerd[1498]: time="2025-05-15T00:02:05.000919950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:05.005325 containerd[1498]: time="2025-05-15T00:02:05.001186540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:05.011474 kubelet[2261]: W0515 00:02:05.011154 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.148.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused May 15 00:02:05.011474 kubelet[2261]: E0515 00:02:05.011201 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.148.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError" May 15 00:02:05.021054 containerd[1498]: time="2025-05-15T00:02:05.020828690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:05.021054 containerd[1498]: time="2025-05-15T00:02:05.020887660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:05.021054 containerd[1498]: time="2025-05-15T00:02:05.020898730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:05.021054 containerd[1498]: time="2025-05-15T00:02:05.020974840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:05.054174 systemd[1]: Started cri-containerd-f4d473d156c1457fa551c82478061a6b7769180e61c8ee323a064895f0a6b959.scope - libcontainer container f4d473d156c1457fa551c82478061a6b7769180e61c8ee323a064895f0a6b959. May 15 00:02:05.068494 containerd[1498]: time="2025-05-15T00:02:05.066978600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:05.071401 kubelet[2261]: E0515 00:02:05.071362 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.148.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-148-154?timeout=10s\": dial tcp 172.237.148.154:6443: connect: connection refused" interval="1.6s" May 15 00:02:05.072084 containerd[1498]: time="2025-05-15T00:02:05.071962030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:05.072084 containerd[1498]: time="2025-05-15T00:02:05.071983280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:05.072316 containerd[1498]: time="2025-05-15T00:02:05.072198260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:05.111265 systemd[1]: Started cri-containerd-f15a2bf408fcef0065126b6db0fd9af7f452627d834b19f18fba9cbdc5232d38.scope - libcontainer container f15a2bf408fcef0065126b6db0fd9af7f452627d834b19f18fba9cbdc5232d38. 
May 15 00:02:05.122634 kubelet[2261]: W0515 00:02:05.119359 2261 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.148.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.148.154:6443: connect: connection refused May 15 00:02:05.122634 kubelet[2261]: E0515 00:02:05.119432 2261 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.148.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.148.154:6443: connect: connection refused" logger="UnhandledError" May 15 00:02:05.135196 systemd[1]: Started cri-containerd-17e5f2bdb08fe1667e1d624e4a4824656a840c3f2f7bb2aaf25e81b710ed6df4.scope - libcontainer container 17e5f2bdb08fe1667e1d624e4a4824656a840c3f2f7bb2aaf25e81b710ed6df4. May 15 00:02:05.209335 containerd[1498]: time="2025-05-15T00:02:05.204537330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-148-154,Uid:4e38d60ec3a6bbba332dc29779556aee,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4d473d156c1457fa551c82478061a6b7769180e61c8ee323a064895f0a6b959\"" May 15 00:02:05.211613 kubelet[2261]: E0515 00:02:05.211535 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:05.223760 containerd[1498]: time="2025-05-15T00:02:05.223734530Z" level=info msg="CreateContainer within sandbox \"f4d473d156c1457fa551c82478061a6b7769180e61c8ee323a064895f0a6b959\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:02:05.237262 containerd[1498]: time="2025-05-15T00:02:05.237238070Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-237-148-154,Uid:100e4532d16598bbbc9e83dd5a8638c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f15a2bf408fcef0065126b6db0fd9af7f452627d834b19f18fba9cbdc5232d38\"" May 15 00:02:05.237889 kubelet[2261]: E0515 00:02:05.237871 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:05.240640 containerd[1498]: time="2025-05-15T00:02:05.240608120Z" level=info msg="CreateContainer within sandbox \"f15a2bf408fcef0065126b6db0fd9af7f452627d834b19f18fba9cbdc5232d38\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:02:05.249742 containerd[1498]: time="2025-05-15T00:02:05.249708670Z" level=info msg="CreateContainer within sandbox \"f4d473d156c1457fa551c82478061a6b7769180e61c8ee323a064895f0a6b959\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e1b43c2ee089d39e3a6b60b13baac71a763b74da91a9e97bed4fb0a591b51a2\"" May 15 00:02:05.253162 containerd[1498]: time="2025-05-15T00:02:05.252121070Z" level=info msg="StartContainer for \"1e1b43c2ee089d39e3a6b60b13baac71a763b74da91a9e97bed4fb0a591b51a2\"" May 15 00:02:05.253322 containerd[1498]: time="2025-05-15T00:02:05.252554250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-148-154,Uid:6f4c17b8b19f6a9da6961b455a969c9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"17e5f2bdb08fe1667e1d624e4a4824656a840c3f2f7bb2aaf25e81b710ed6df4\"" May 15 00:02:05.254613 kubelet[2261]: I0515 00:02:05.254596 2261 kubelet_node_status.go:76] "Attempting to register node" node="172-237-148-154" May 15 00:02:05.254849 containerd[1498]: time="2025-05-15T00:02:05.254823900Z" level=info msg="CreateContainer within sandbox \"f15a2bf408fcef0065126b6db0fd9af7f452627d834b19f18fba9cbdc5232d38\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42b4760f348afb465d00b71003c37e7f76b349e3f2dc5d602017ed03402b91f3\"" May 15 00:02:05.255823 kubelet[2261]: E0515 00:02:05.254596 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:05.256234 containerd[1498]: time="2025-05-15T00:02:05.256203480Z" level=info msg="StartContainer for \"42b4760f348afb465d00b71003c37e7f76b349e3f2dc5d602017ed03402b91f3\"" May 15 00:02:05.256524 kubelet[2261]: E0515 00:02:05.256196 2261 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.237.148.154:6443/api/v1/nodes\": dial tcp 172.237.148.154:6443: connect: connection refused" node="172-237-148-154" May 15 00:02:05.260953 containerd[1498]: time="2025-05-15T00:02:05.260930710Z" level=info msg="CreateContainer within sandbox \"17e5f2bdb08fe1667e1d624e4a4824656a840c3f2f7bb2aaf25e81b710ed6df4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:02:05.279596 containerd[1498]: time="2025-05-15T00:02:05.279564520Z" level=info msg="CreateContainer within sandbox \"17e5f2bdb08fe1667e1d624e4a4824656a840c3f2f7bb2aaf25e81b710ed6df4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f2ff15d7e608de046691115ad71387a72a0d44c2c8be8ac3afd865f3d1b076e5\"" May 15 00:02:05.280091 containerd[1498]: time="2025-05-15T00:02:05.280059810Z" level=info msg="StartContainer for \"f2ff15d7e608de046691115ad71387a72a0d44c2c8be8ac3afd865f3d1b076e5\"" May 15 00:02:05.298561 systemd[1]: Started cri-containerd-42b4760f348afb465d00b71003c37e7f76b349e3f2dc5d602017ed03402b91f3.scope - libcontainer container 42b4760f348afb465d00b71003c37e7f76b349e3f2dc5d602017ed03402b91f3. 
May 15 00:02:05.312139 systemd[1]: Started cri-containerd-1e1b43c2ee089d39e3a6b60b13baac71a763b74da91a9e97bed4fb0a591b51a2.scope - libcontainer container 1e1b43c2ee089d39e3a6b60b13baac71a763b74da91a9e97bed4fb0a591b51a2. May 15 00:02:05.328863 systemd[1]: Started cri-containerd-f2ff15d7e608de046691115ad71387a72a0d44c2c8be8ac3afd865f3d1b076e5.scope - libcontainer container f2ff15d7e608de046691115ad71387a72a0d44c2c8be8ac3afd865f3d1b076e5. May 15 00:02:05.378738 containerd[1498]: time="2025-05-15T00:02:05.378698150Z" level=info msg="StartContainer for \"42b4760f348afb465d00b71003c37e7f76b349e3f2dc5d602017ed03402b91f3\" returns successfully" May 15 00:02:05.399187 containerd[1498]: time="2025-05-15T00:02:05.398747250Z" level=info msg="StartContainer for \"1e1b43c2ee089d39e3a6b60b13baac71a763b74da91a9e97bed4fb0a591b51a2\" returns successfully" May 15 00:02:05.422130 containerd[1498]: time="2025-05-15T00:02:05.422097810Z" level=info msg="StartContainer for \"f2ff15d7e608de046691115ad71387a72a0d44c2c8be8ac3afd865f3d1b076e5\" returns successfully" May 15 00:02:05.718175 kubelet[2261]: E0515 00:02:05.716801 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154" May 15 00:02:05.718175 kubelet[2261]: E0515 00:02:05.716925 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:05.718175 kubelet[2261]: E0515 00:02:05.718085 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154" May 15 00:02:05.718721 kubelet[2261]: E0515 00:02:05.718334 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 
172.232.0.19" May 15 00:02:05.719538 kubelet[2261]: E0515 00:02:05.719519 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154" May 15 00:02:05.719700 kubelet[2261]: E0515 00:02:05.719649 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:05.887410 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 15 00:02:06.921669 kubelet[2261]: I0515 00:02:06.921608 2261 kubelet_node_status.go:76] "Attempting to register node" node="172-237-148-154" May 15 00:02:06.925364 kubelet[2261]: E0515 00:02:06.925328 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154" May 15 00:02:06.925671 kubelet[2261]: E0515 00:02:06.925651 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:06.926905 kubelet[2261]: E0515 00:02:06.926884 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154" May 15 00:02:06.926999 kubelet[2261]: E0515 00:02:06.926981 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:07.932050 kubelet[2261]: E0515 00:02:07.931678 2261 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-148-154\" not found" node="172-237-148-154" May 15 00:02:07.932050 kubelet[2261]: E0515 00:02:07.931921 2261 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:08.671422 kubelet[2261]: E0515 00:02:08.671387 2261 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-148-154\" not found" node="172-237-148-154" May 15 00:02:08.702791 kubelet[2261]: I0515 00:02:08.702375 2261 kubelet_node_status.go:79] "Successfully registered node" node="172-237-148-154" May 15 00:02:08.769453 kubelet[2261]: I0515 00:02:08.769411 2261 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:08.810392 kubelet[2261]: E0515 00:02:08.810351 2261 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-148-154\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:08.810392 kubelet[2261]: I0515 00:02:08.810386 2261 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:08.813827 kubelet[2261]: E0515 00:02:08.813798 2261 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-148-154\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:08.813827 kubelet[2261]: I0515 00:02:08.813820 2261 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-148-154" May 15 00:02:08.816632 kubelet[2261]: E0515 00:02:08.816059 2261 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-148-154\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-148-154" May 15 00:02:08.899639 kubelet[2261]: I0515 00:02:08.899610 2261 apiserver.go:52] "Watching 
apiserver" May 15 00:02:08.977905 kubelet[2261]: I0515 00:02:08.977860 2261 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:02:10.683468 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-7.scope)... May 15 00:02:10.683489 systemd[1]: Reloading... May 15 00:02:11.277966 zram_generator::config[2579]: No configuration found. May 15 00:02:11.452945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:02:11.563663 systemd[1]: Reloading finished in 879 ms. May 15 00:02:11.600120 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:02:11.623889 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:02:11.624447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:02:11.624934 systemd[1]: kubelet.service: Consumed 982ms CPU time, 127.1M memory peak. May 15 00:02:11.633262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:02:11.946014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:02:11.959416 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:02:12.021664 kubelet[2630]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:02:12.021664 kubelet[2630]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 15 00:02:12.021664 kubelet[2630]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:02:12.022956 kubelet[2630]: I0515 00:02:12.021686 2630 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:02:12.033521 kubelet[2630]: I0515 00:02:12.032899 2630 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 00:02:12.033521 kubelet[2630]: I0515 00:02:12.032921 2630 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:02:12.036480 kubelet[2630]: I0515 00:02:12.036453 2630 server.go:954] "Client rotation is on, will bootstrap in background" May 15 00:02:12.037802 kubelet[2630]: I0515 00:02:12.037772 2630 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:02:12.041858 kubelet[2630]: I0515 00:02:12.041295 2630 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:02:12.050687 kubelet[2630]: E0515 00:02:12.050634 2630 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:02:12.050687 kubelet[2630]: I0515 00:02:12.050674 2630 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 15 00:02:12.054045 sudo[2645]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 00:02:12.054970 sudo[2645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 00:02:12.055553 kubelet[2630]: I0515 00:02:12.055226 2630 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:02:12.056015 kubelet[2630]: I0515 00:02:12.055584 2630 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:02:12.056015 kubelet[2630]: I0515 00:02:12.055616 2630 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-148-154","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":1
0000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:02:12.056015 kubelet[2630]: I0515 00:02:12.055844 2630 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:02:12.056015 kubelet[2630]: I0515 00:02:12.055854 2630 container_manager_linux.go:304] "Creating device plugin manager" May 15 00:02:12.056364 kubelet[2630]: I0515 00:02:12.056184 2630 state_mem.go:36] "Initialized new in-memory state store" May 15 00:02:12.057443 kubelet[2630]: I0515 00:02:12.056442 2630 kubelet.go:446] "Attempting to sync node with API server" May 15 00:02:12.057662 kubelet[2630]: I0515 00:02:12.057645 2630 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:02:12.057808 kubelet[2630]: I0515 00:02:12.057797 2630 kubelet.go:352] "Adding apiserver pod source" May 15 00:02:12.057961 kubelet[2630]: I0515 00:02:12.057941 2630 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:02:12.065291 kubelet[2630]: I0515 00:02:12.065262 2630 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:02:12.065868 kubelet[2630]: I0515 00:02:12.065844 2630 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:02:12.069852 kubelet[2630]: I0515 00:02:12.069610 2630 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 00:02:12.069852 kubelet[2630]: I0515 00:02:12.069846 2630 server.go:1287] "Started kubelet" May 15 00:02:12.074199 kubelet[2630]: I0515 00:02:12.074094 2630 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:02:12.074754 kubelet[2630]: I0515 00:02:12.074713 2630 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:02:12.075066 kubelet[2630]: I0515 00:02:12.074932 2630 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:02:12.077088 kubelet[2630]: I0515 00:02:12.076023 2630 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:02:12.102046 kubelet[2630]: I0515 00:02:12.100010 2630 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:02:12.105892 kubelet[2630]: I0515 00:02:12.105876 2630 server.go:490] "Adding debug handlers to kubelet server" May 15 00:02:12.118240 kubelet[2630]: I0515 00:02:12.109811 2630 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 00:02:12.118539 kubelet[2630]: I0515 00:02:12.109842 2630 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:02:12.118696 kubelet[2630]: E0515 00:02:12.109967 2630 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-237-148-154\" not found" May 15 00:02:12.119067 kubelet[2630]: I0515 00:02:12.119055 2630 reconciler.go:26] "Reconciler: start to sync state" May 15 00:02:12.121685 kubelet[2630]: I0515 00:02:12.121646 2630 factory.go:221] Registration of the systemd container factory successfully May 15 00:02:12.121855 kubelet[2630]: I0515 00:02:12.121835 2630 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:02:12.127838 kubelet[2630]: I0515 00:02:12.127807 2630 factory.go:221] Registration of the containerd container factory successfully May 15 00:02:12.131792 kubelet[2630]: I0515 00:02:12.131763 2630 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 15 00:02:12.133370 kubelet[2630]: I0515 00:02:12.133355 2630 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:02:12.133467 kubelet[2630]: I0515 00:02:12.133456 2630 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 00:02:12.133600 kubelet[2630]: I0515 00:02:12.133586 2630 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 00:02:12.133670 kubelet[2630]: I0515 00:02:12.133660 2630 kubelet.go:2388] "Starting kubelet main sync loop" May 15 00:02:12.133816 kubelet[2630]: E0515 00:02:12.133798 2630 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:02:12.149003 kubelet[2630]: E0515 00:02:12.148983 2630 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:02:12.241842 kubelet[2630]: I0515 00:02:12.241616 2630 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 00:02:12.242111 kubelet[2630]: I0515 00:02:12.242013 2630 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 00:02:12.242186 kubelet[2630]: I0515 00:02:12.242175 2630 state_mem.go:36] "Initialized new in-memory state store" May 15 00:02:12.242426 kubelet[2630]: I0515 00:02:12.242404 2630 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:02:12.242972 kubelet[2630]: E0515 00:02:12.242530 2630 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:02:12.243104 kubelet[2630]: I0515 00:02:12.242938 2630 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:02:12.243203 kubelet[2630]: I0515 00:02:12.243192 2630 policy_none.go:49] "None policy: Start" May 15 00:02:12.243281 kubelet[2630]: I0515 00:02:12.243272 2630 
memory_manager.go:186] "Starting memorymanager" policy="None" May 15 00:02:12.243359 kubelet[2630]: I0515 00:02:12.243349 2630 state_mem.go:35] "Initializing new in-memory state store" May 15 00:02:12.243727 kubelet[2630]: I0515 00:02:12.243715 2630 state_mem.go:75] "Updated machine memory state" May 15 00:02:12.251391 kubelet[2630]: I0515 00:02:12.251376 2630 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:02:12.251770 kubelet[2630]: I0515 00:02:12.251755 2630 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:02:12.252601 kubelet[2630]: I0515 00:02:12.251952 2630 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:02:12.253371 kubelet[2630]: E0515 00:02:12.252813 2630 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 00:02:12.256208 kubelet[2630]: I0515 00:02:12.256194 2630 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:02:12.401391 kubelet[2630]: I0515 00:02:12.400848 2630 kubelet_node_status.go:76] "Attempting to register node" node="172-237-148-154" May 15 00:02:12.411967 kubelet[2630]: I0515 00:02:12.411882 2630 kubelet_node_status.go:125] "Node was previously registered" node="172-237-148-154" May 15 00:02:12.412269 kubelet[2630]: I0515 00:02:12.412250 2630 kubelet_node_status.go:79] "Successfully registered node" node="172-237-148-154" May 15 00:02:12.445192 kubelet[2630]: I0515 00:02:12.445174 2630 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:12.445911 kubelet[2630]: I0515 00:02:12.445487 2630 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:12.446510 kubelet[2630]: I0515 00:02:12.446172 2630 
kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-148-154" May 15 00:02:12.528275 kubelet[2630]: I0515 00:02:12.527452 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f4c17b8b19f6a9da6961b455a969c9e-ca-certs\") pod \"kube-apiserver-172-237-148-154\" (UID: \"6f4c17b8b19f6a9da6961b455a969c9e\") " pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:12.528871 kubelet[2630]: I0515 00:02:12.528499 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-ca-certs\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:12.528871 kubelet[2630]: I0515 00:02:12.528527 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-flexvolume-dir\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:12.528871 kubelet[2630]: I0515 00:02:12.528573 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-k8s-certs\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:12.528871 kubelet[2630]: I0515 00:02:12.528825 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f4c17b8b19f6a9da6961b455a969c9e-k8s-certs\") pod 
\"kube-apiserver-172-237-148-154\" (UID: \"6f4c17b8b19f6a9da6961b455a969c9e\") " pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:12.531175 kubelet[2630]: I0515 00:02:12.528843 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f4c17b8b19f6a9da6961b455a969c9e-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-148-154\" (UID: \"6f4c17b8b19f6a9da6961b455a969c9e\") " pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:12.531175 kubelet[2630]: I0515 00:02:12.530553 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-kubeconfig\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:12.531175 kubelet[2630]: I0515 00:02:12.530571 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/100e4532d16598bbbc9e83dd5a8638c4-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-148-154\" (UID: \"100e4532d16598bbbc9e83dd5a8638c4\") " pod="kube-system/kube-controller-manager-172-237-148-154" May 15 00:02:12.531175 kubelet[2630]: I0515 00:02:12.530586 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e38d60ec3a6bbba332dc29779556aee-kubeconfig\") pod \"kube-scheduler-172-237-148-154\" (UID: \"4e38d60ec3a6bbba332dc29779556aee\") " pod="kube-system/kube-scheduler-172-237-148-154" May 15 00:02:12.791503 kubelet[2630]: E0515 00:02:12.790870 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:12.794481 kubelet[2630]: E0515 00:02:12.791926 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:12.794481 kubelet[2630]: E0515 00:02:12.792313 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:13.049726 sudo[2645]: pam_unix(sudo:session): session closed for user root May 15 00:02:13.061321 kubelet[2630]: I0515 00:02:13.061055 2630 apiserver.go:52] "Watching apiserver" May 15 00:02:13.120186 kubelet[2630]: I0515 00:02:13.120134 2630 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:02:13.172576 kubelet[2630]: E0515 00:02:13.172430 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:13.173774 kubelet[2630]: E0515 00:02:13.172943 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:13.173774 kubelet[2630]: I0515 00:02:13.173665 2630 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:13.184573 kubelet[2630]: E0515 00:02:13.183943 2630 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-148-154\" already exists" pod="kube-system/kube-apiserver-172-237-148-154" May 15 00:02:13.184573 kubelet[2630]: E0515 00:02:13.184076 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:13.210872 kubelet[2630]: I0515 00:02:13.210813 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-148-154" podStartSLOduration=1.210780799 podStartE2EDuration="1.210780799s" podCreationTimestamp="2025-05-15 00:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:02:13.203743646 +0000 UTC m=+1.233862462" watchObservedRunningTime="2025-05-15 00:02:13.210780799 +0000 UTC m=+1.240899625" May 15 00:02:13.212565 kubelet[2630]: I0515 00:02:13.212346 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-148-154" podStartSLOduration=1.212318439 podStartE2EDuration="1.212318439s" podCreationTimestamp="2025-05-15 00:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:02:13.212046849 +0000 UTC m=+1.242165665" watchObservedRunningTime="2025-05-15 00:02:13.212318439 +0000 UTC m=+1.242437265" May 15 00:02:13.222642 kubelet[2630]: I0515 00:02:13.222492 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-148-154" podStartSLOduration=1.222480963 podStartE2EDuration="1.222480963s" podCreationTimestamp="2025-05-15 00:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:02:13.221757813 +0000 UTC m=+1.251876629" watchObservedRunningTime="2025-05-15 00:02:13.222480963 +0000 UTC m=+1.252599789" May 15 00:02:14.174905 kubelet[2630]: E0515 00:02:14.174311 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 
172.232.0.19" May 15 00:02:14.174905 kubelet[2630]: E0515 00:02:14.174803 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:14.885770 sudo[1713]: pam_unix(sudo:session): session closed for user root May 15 00:02:14.939147 sshd[1712]: Connection closed by 139.178.68.195 port 57182 May 15 00:02:14.940267 sshd-session[1710]: pam_unix(sshd:session): session closed for user core May 15 00:02:14.945862 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. May 15 00:02:14.947877 systemd[1]: sshd@6-172.237.148.154:22-139.178.68.195:57182.service: Deactivated successfully. May 15 00:02:14.954655 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:02:14.954952 systemd[1]: session-7.scope: Consumed 6.060s CPU time, 261.7M memory peak. May 15 00:02:14.956961 systemd-logind[1475]: Removed session 7. May 15 00:02:15.284936 kubelet[2630]: E0515 00:02:15.284828 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:15.782625 kubelet[2630]: I0515 00:02:15.782549 2630 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:02:15.785330 containerd[1498]: time="2025-05-15T00:02:15.784980325Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 00:02:15.786287 kubelet[2630]: I0515 00:02:15.785713 2630 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:02:16.182112 kubelet[2630]: E0515 00:02:16.181354 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:16.410552 kubelet[2630]: E0515 00:02:16.409207 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:16.809347 systemd[1]: Created slice kubepods-besteffort-pod509d27a5_6289_4e84_ba44_51d322754df8.slice - libcontainer container kubepods-besteffort-pod509d27a5_6289_4e84_ba44_51d322754df8.slice. May 15 00:02:16.824635 systemd[1]: Created slice kubepods-burstable-pod114ade3f_b18a_4f94_975c_78f9bbcdd956.slice - libcontainer container kubepods-burstable-pod114ade3f_b18a_4f94_975c_78f9bbcdd956.slice. 
May 15 00:02:16.928058 kubelet[2630]: I0515 00:02:16.927178 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/509d27a5-6289-4e84-ba44-51d322754df8-kube-proxy\") pod \"kube-proxy-bpsvr\" (UID: \"509d27a5-6289-4e84-ba44-51d322754df8\") " pod="kube-system/kube-proxy-bpsvr" May 15 00:02:16.928058 kubelet[2630]: I0515 00:02:16.927243 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-bpf-maps\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928058 kubelet[2630]: I0515 00:02:16.927273 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cni-path\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928058 kubelet[2630]: I0515 00:02:16.927293 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-etc-cni-netd\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928058 kubelet[2630]: I0515 00:02:16.927321 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-lib-modules\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928058 kubelet[2630]: I0515 00:02:16.927344 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-xtables-lock\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928551 kubelet[2630]: I0515 00:02:16.927369 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/509d27a5-6289-4e84-ba44-51d322754df8-lib-modules\") pod \"kube-proxy-bpsvr\" (UID: \"509d27a5-6289-4e84-ba44-51d322754df8\") " pod="kube-system/kube-proxy-bpsvr" May 15 00:02:16.928551 kubelet[2630]: I0515 00:02:16.927391 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q77h7\" (UniqueName: \"kubernetes.io/projected/509d27a5-6289-4e84-ba44-51d322754df8-kube-api-access-q77h7\") pod \"kube-proxy-bpsvr\" (UID: \"509d27a5-6289-4e84-ba44-51d322754df8\") " pod="kube-system/kube-proxy-bpsvr" May 15 00:02:16.928551 kubelet[2630]: I0515 00:02:16.927417 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-run\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928551 kubelet[2630]: I0515 00:02:16.927449 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/114ade3f-b18a-4f94-975c-78f9bbcdd956-clustermesh-secrets\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928551 kubelet[2630]: I0515 00:02:16.927481 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z989\" (UniqueName: 
\"kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-kube-api-access-6z989\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928687 kubelet[2630]: I0515 00:02:16.927504 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-cgroup\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928687 kubelet[2630]: I0515 00:02:16.927525 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-hubble-tls\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928687 kubelet[2630]: I0515 00:02:16.927543 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/509d27a5-6289-4e84-ba44-51d322754df8-xtables-lock\") pod \"kube-proxy-bpsvr\" (UID: \"509d27a5-6289-4e84-ba44-51d322754df8\") " pod="kube-system/kube-proxy-bpsvr" May 15 00:02:16.928687 kubelet[2630]: I0515 00:02:16.927579 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-net\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928687 kubelet[2630]: I0515 00:02:16.927600 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-kernel\") pod \"cilium-hcgwv\" 
(UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928687 kubelet[2630]: I0515 00:02:16.927696 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-hostproc\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.928866 kubelet[2630]: I0515 00:02:16.927729 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-config-path\") pod \"cilium-hcgwv\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") " pod="kube-system/cilium-hcgwv" May 15 00:02:16.982832 systemd[1]: Created slice kubepods-besteffort-pod8b8d7a44_30c6_4a33_ac93_eaadcc355d9a.slice - libcontainer container kubepods-besteffort-pod8b8d7a44_30c6_4a33_ac93_eaadcc355d9a.slice. 
May 15 00:02:17.028325 kubelet[2630]: I0515 00:02:17.028160 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6pk\" (UniqueName: \"kubernetes.io/projected/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-kube-api-access-8n6pk\") pod \"cilium-operator-6c4d7847fc-hb45s\" (UID: \"8b8d7a44-30c6-4a33-ac93-eaadcc355d9a\") " pod="kube-system/cilium-operator-6c4d7847fc-hb45s" May 15 00:02:17.028460 kubelet[2630]: I0515 00:02:17.028341 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hb45s\" (UID: \"8b8d7a44-30c6-4a33-ac93-eaadcc355d9a\") " pod="kube-system/cilium-operator-6c4d7847fc-hb45s" May 15 00:02:17.123099 kubelet[2630]: E0515 00:02:17.120224 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:17.123659 containerd[1498]: time="2025-05-15T00:02:17.122218516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bpsvr,Uid:509d27a5-6289-4e84-ba44-51d322754df8,Namespace:kube-system,Attempt:0,}" May 15 00:02:17.130746 kubelet[2630]: E0515 00:02:17.130706 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:17.139089 containerd[1498]: time="2025-05-15T00:02:17.134456200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hcgwv,Uid:114ade3f-b18a-4f94-975c-78f9bbcdd956,Namespace:kube-system,Attempt:0,}" May 15 00:02:17.193225 kubelet[2630]: E0515 00:02:17.191779 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:17.251541 containerd[1498]: time="2025-05-15T00:02:17.251107876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:17.252684 containerd[1498]: time="2025-05-15T00:02:17.252623997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:17.252780 containerd[1498]: time="2025-05-15T00:02:17.252669157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:17.252963 containerd[1498]: time="2025-05-15T00:02:17.252907267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:17.257628 containerd[1498]: time="2025-05-15T00:02:17.257558218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:17.257825 containerd[1498]: time="2025-05-15T00:02:17.257796608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:17.257947 containerd[1498]: time="2025-05-15T00:02:17.257924318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:17.259065 containerd[1498]: time="2025-05-15T00:02:17.259003459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:17.290443 kubelet[2630]: E0515 00:02:17.289917 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:17.290770 containerd[1498]: time="2025-05-15T00:02:17.290739919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hb45s,Uid:8b8d7a44-30c6-4a33-ac93-eaadcc355d9a,Namespace:kube-system,Attempt:0,}" May 15 00:02:17.313959 systemd[1]: Started cri-containerd-2460896d574ebe62f1881a58da066b67be87589561e516cecf933b2c351417d0.scope - libcontainer container 2460896d574ebe62f1881a58da066b67be87589561e516cecf933b2c351417d0. May 15 00:02:17.469929 systemd[1]: Started cri-containerd-f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985.scope - libcontainer container f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985. May 15 00:02:17.539114 containerd[1498]: time="2025-05-15T00:02:17.537621367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:17.539114 containerd[1498]: time="2025-05-15T00:02:17.537694977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:17.539114 containerd[1498]: time="2025-05-15T00:02:17.537707377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:17.540512 containerd[1498]: time="2025-05-15T00:02:17.540459448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:17.547215 containerd[1498]: time="2025-05-15T00:02:17.547153509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hcgwv,Uid:114ade3f-b18a-4f94-975c-78f9bbcdd956,Namespace:kube-system,Attempt:0,} returns sandbox id \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\"" May 15 00:02:17.550101 kubelet[2630]: E0515 00:02:17.549237 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:17.552876 containerd[1498]: time="2025-05-15T00:02:17.552811761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bpsvr,Uid:509d27a5-6289-4e84-ba44-51d322754df8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2460896d574ebe62f1881a58da066b67be87589561e516cecf933b2c351417d0\"" May 15 00:02:17.555429 kubelet[2630]: E0515 00:02:17.555396 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:17.557713 containerd[1498]: time="2025-05-15T00:02:17.557614232Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:02:17.565064 containerd[1498]: time="2025-05-15T00:02:17.563950754Z" level=info msg="CreateContainer within sandbox \"2460896d574ebe62f1881a58da066b67be87589561e516cecf933b2c351417d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:02:17.598280 containerd[1498]: time="2025-05-15T00:02:17.598247403Z" level=info msg="CreateContainer within sandbox \"2460896d574ebe62f1881a58da066b67be87589561e516cecf933b2c351417d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a7419e44f1523101673e5d0f2670acd42a7eeb73aca3cf80aaa002ad8f78011\"" 
May 15 00:02:17.601387 containerd[1498]: time="2025-05-15T00:02:17.601360554Z" level=info msg="StartContainer for \"5a7419e44f1523101673e5d0f2670acd42a7eeb73aca3cf80aaa002ad8f78011\"" May 15 00:02:17.610072 systemd[1]: Started cri-containerd-05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af.scope - libcontainer container 05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af. May 15 00:02:17.682188 containerd[1498]: time="2025-05-15T00:02:17.682128619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hb45s,Uid:8b8d7a44-30c6-4a33-ac93-eaadcc355d9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\"" May 15 00:02:17.683320 kubelet[2630]: E0515 00:02:17.683273 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:17.696189 systemd[1]: Started cri-containerd-5a7419e44f1523101673e5d0f2670acd42a7eeb73aca3cf80aaa002ad8f78011.scope - libcontainer container 5a7419e44f1523101673e5d0f2670acd42a7eeb73aca3cf80aaa002ad8f78011. 
May 15 00:02:17.733209 containerd[1498]: time="2025-05-15T00:02:17.732517915Z" level=info msg="StartContainer for \"5a7419e44f1523101673e5d0f2670acd42a7eeb73aca3cf80aaa002ad8f78011\" returns successfully" May 15 00:02:18.193967 kubelet[2630]: E0515 00:02:18.193845 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:18.209559 kubelet[2630]: I0515 00:02:18.209466 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bpsvr" podStartSLOduration=2.209426217 podStartE2EDuration="2.209426217s" podCreationTimestamp="2025-05-15 00:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:02:18.206314496 +0000 UTC m=+6.236433312" watchObservedRunningTime="2025-05-15 00:02:18.209426217 +0000 UTC m=+6.239545043" May 15 00:02:20.345068 update_engine[1476]: I20250515 00:02:20.344254 1476 update_attempter.cc:509] Updating boot flags... 
May 15 00:02:20.617047 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2998) May 15 00:02:20.806065 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2997) May 15 00:02:22.236589 kubelet[2630]: E0515 00:02:22.236536 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:23.206802 kubelet[2630]: E0515 00:02:23.206721 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:24.214718 kubelet[2630]: E0515 00:02:24.213269 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:25.469903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996985868.mount: Deactivated successfully. 
May 15 00:02:28.706796 containerd[1498]: time="2025-05-15T00:02:28.706339617Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:02:28.707572 containerd[1498]: time="2025-05-15T00:02:28.707535167Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 00:02:28.708058 containerd[1498]: time="2025-05-15T00:02:28.708009517Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:02:28.710061 containerd[1498]: time="2025-05-15T00:02:28.709587967Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.151917985s" May 15 00:02:28.710061 containerd[1498]: time="2025-05-15T00:02:28.709647447Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 00:02:28.712447 containerd[1498]: time="2025-05-15T00:02:28.712424708Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:02:28.715776 containerd[1498]: time="2025-05-15T00:02:28.715752988Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:02:28.731227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699401436.mount: Deactivated successfully. May 15 00:02:28.735500 containerd[1498]: time="2025-05-15T00:02:28.735468631Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\"" May 15 00:02:28.737067 containerd[1498]: time="2025-05-15T00:02:28.736050231Z" level=info msg="StartContainer for \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\"" May 15 00:02:28.835652 systemd[1]: Started cri-containerd-3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba.scope - libcontainer container 3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba. May 15 00:02:28.872741 containerd[1498]: time="2025-05-15T00:02:28.872697122Z" level=info msg="StartContainer for \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\" returns successfully" May 15 00:02:28.892758 systemd[1]: cri-containerd-3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba.scope: Deactivated successfully. 
May 15 00:02:28.958583 containerd[1498]: time="2025-05-15T00:02:28.958413085Z" level=info msg="shim disconnected" id=3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba namespace=k8s.io May 15 00:02:28.958583 containerd[1498]: time="2025-05-15T00:02:28.958498445Z" level=warning msg="cleaning up after shim disconnected" id=3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba namespace=k8s.io May 15 00:02:28.958583 containerd[1498]: time="2025-05-15T00:02:28.958510995Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:02:29.303895 kubelet[2630]: E0515 00:02:29.303747 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:29.306847 containerd[1498]: time="2025-05-15T00:02:29.306771545Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:02:29.321303 containerd[1498]: time="2025-05-15T00:02:29.321235787Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\"" May 15 00:02:29.324688 containerd[1498]: time="2025-05-15T00:02:29.324627337Z" level=info msg="StartContainer for \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\"" May 15 00:02:29.375188 systemd[1]: Started cri-containerd-b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148.scope - libcontainer container b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148. 
May 15 00:02:29.408155 containerd[1498]: time="2025-05-15T00:02:29.407673709Z" level=info msg="StartContainer for \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\" returns successfully" May 15 00:02:29.430211 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:02:29.431056 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:02:29.431657 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 00:02:29.437952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:02:29.438436 systemd[1]: cri-containerd-b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148.scope: Deactivated successfully. May 15 00:02:29.483702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:02:29.486648 containerd[1498]: time="2025-05-15T00:02:29.486569200Z" level=info msg="shim disconnected" id=b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148 namespace=k8s.io May 15 00:02:29.486648 containerd[1498]: time="2025-05-15T00:02:29.486614660Z" level=warning msg="cleaning up after shim disconnected" id=b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148 namespace=k8s.io May 15 00:02:29.486648 containerd[1498]: time="2025-05-15T00:02:29.486622780Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:02:29.727352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba-rootfs.mount: Deactivated successfully. 
May 15 00:02:30.316694 kubelet[2630]: E0515 00:02:30.316622 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:30.328562 containerd[1498]: time="2025-05-15T00:02:30.328433689Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:02:30.368744 containerd[1498]: time="2025-05-15T00:02:30.368700194Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\"" May 15 00:02:30.375137 containerd[1498]: time="2025-05-15T00:02:30.375056975Z" level=info msg="StartContainer for \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\"" May 15 00:02:30.591024 containerd[1498]: time="2025-05-15T00:02:30.590891884Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:02:30.592485 containerd[1498]: time="2025-05-15T00:02:30.591991894Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 00:02:30.592317 systemd[1]: Started cri-containerd-31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb.scope - libcontainer container 31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb. 
May 15 00:02:30.594215 containerd[1498]: time="2025-05-15T00:02:30.593469994Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:02:30.599594 containerd[1498]: time="2025-05-15T00:02:30.598113575Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.885553237s" May 15 00:02:30.599594 containerd[1498]: time="2025-05-15T00:02:30.598177365Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 00:02:30.608089 containerd[1498]: time="2025-05-15T00:02:30.607919986Z" level=info msg="CreateContainer within sandbox \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 00:02:30.621154 containerd[1498]: time="2025-05-15T00:02:30.621124498Z" level=info msg="CreateContainer within sandbox \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\"" May 15 00:02:30.621657 containerd[1498]: time="2025-05-15T00:02:30.621616838Z" level=info msg="StartContainer for \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\"" May 15 00:02:30.667709 containerd[1498]: time="2025-05-15T00:02:30.667429134Z" level=info msg="StartContainer for 
\"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\" returns successfully" May 15 00:02:30.673961 systemd[1]: cri-containerd-31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb.scope: Deactivated successfully. May 15 00:02:30.690199 systemd[1]: Started cri-containerd-d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f.scope - libcontainer container d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f. May 15 00:02:30.730928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb-rootfs.mount: Deactivated successfully. May 15 00:02:30.745934 containerd[1498]: time="2025-05-15T00:02:30.745564034Z" level=info msg="shim disconnected" id=31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb namespace=k8s.io May 15 00:02:30.745934 containerd[1498]: time="2025-05-15T00:02:30.745735674Z" level=warning msg="cleaning up after shim disconnected" id=31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb namespace=k8s.io May 15 00:02:30.745934 containerd[1498]: time="2025-05-15T00:02:30.745757514Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:02:30.764533 containerd[1498]: time="2025-05-15T00:02:30.764491187Z" level=info msg="StartContainer for \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\" returns successfully" May 15 00:02:30.780076 containerd[1498]: time="2025-05-15T00:02:30.778143949Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:02:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 00:02:31.322042 kubelet[2630]: E0515 00:02:31.321732 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:31.327047 
kubelet[2630]: E0515 00:02:31.324754 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:31.329447 containerd[1498]: time="2025-05-15T00:02:31.329188571Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:02:31.349953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489079286.mount: Deactivated successfully. May 15 00:02:31.365638 containerd[1498]: time="2025-05-15T00:02:31.365524465Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\"" May 15 00:02:31.369114 containerd[1498]: time="2025-05-15T00:02:31.368181905Z" level=info msg="StartContainer for \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\"" May 15 00:02:31.447328 systemd[1]: Started cri-containerd-e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb.scope - libcontainer container e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb. 
May 15 00:02:31.666521 containerd[1498]: time="2025-05-15T00:02:31.665074252Z" level=info msg="StartContainer for \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\" returns successfully" May 15 00:02:31.667058 kubelet[2630]: I0515 00:02:31.666914 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hb45s" podStartSLOduration=2.754999147 podStartE2EDuration="15.666845932s" podCreationTimestamp="2025-05-15 00:02:16 +0000 UTC" firstStartedPulling="2025-05-15 00:02:17.68713956 +0000 UTC m=+5.717258386" lastFinishedPulling="2025-05-15 00:02:30.598986355 +0000 UTC m=+18.629105171" observedRunningTime="2025-05-15 00:02:31.429225983 +0000 UTC m=+19.459344799" watchObservedRunningTime="2025-05-15 00:02:31.666845932 +0000 UTC m=+19.696964748" May 15 00:02:31.670176 systemd[1]: cri-containerd-e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb.scope: Deactivated successfully. May 15 00:02:31.707454 containerd[1498]: time="2025-05-15T00:02:31.707192957Z" level=info msg="shim disconnected" id=e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb namespace=k8s.io May 15 00:02:31.707454 containerd[1498]: time="2025-05-15T00:02:31.707280567Z" level=warning msg="cleaning up after shim disconnected" id=e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb namespace=k8s.io May 15 00:02:31.707454 containerd[1498]: time="2025-05-15T00:02:31.707295697Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:02:31.728853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb-rootfs.mount: Deactivated successfully. 
May 15 00:02:32.329054 kubelet[2630]: E0515 00:02:32.329010 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:32.330223 kubelet[2630]: E0515 00:02:32.330170 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:32.333337 containerd[1498]: time="2025-05-15T00:02:32.332823592Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:02:32.355223 containerd[1498]: time="2025-05-15T00:02:32.353578775Z" level=info msg="CreateContainer within sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\"" May 15 00:02:32.355223 containerd[1498]: time="2025-05-15T00:02:32.355170285Z" level=info msg="StartContainer for \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\"" May 15 00:02:32.427168 systemd[1]: Started cri-containerd-e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a.scope - libcontainer container e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a. May 15 00:02:32.538776 containerd[1498]: time="2025-05-15T00:02:32.538657897Z" level=info msg="StartContainer for \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\" returns successfully" May 15 00:02:32.887559 systemd[1]: run-containerd-runc-k8s.io-e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a-runc.8SFcPO.mount: Deactivated successfully. 
May 15 00:02:32.992171 kubelet[2630]: I0515 00:02:32.992104 2630 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 00:02:33.050491 systemd[1]: Created slice kubepods-burstable-pod0f58d263_bf5c_4f84_8ccf_9304975e78cf.slice - libcontainer container kubepods-burstable-pod0f58d263_bf5c_4f84_8ccf_9304975e78cf.slice. May 15 00:02:33.061754 systemd[1]: Created slice kubepods-burstable-podbf7efd49_21f4_44e8_91fe_b1571871c3c4.slice - libcontainer container kubepods-burstable-podbf7efd49_21f4_44e8_91fe_b1571871c3c4.slice. May 15 00:02:33.080765 kubelet[2630]: I0515 00:02:33.080728 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7efd49-21f4-44e8-91fe-b1571871c3c4-config-volume\") pod \"coredns-668d6bf9bc-wr222\" (UID: \"bf7efd49-21f4-44e8-91fe-b1571871c3c4\") " pod="kube-system/coredns-668d6bf9bc-wr222" May 15 00:02:33.081123 kubelet[2630]: I0515 00:02:33.080968 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f58d263-bf5c-4f84-8ccf-9304975e78cf-config-volume\") pod \"coredns-668d6bf9bc-rc7ct\" (UID: \"0f58d263-bf5c-4f84-8ccf-9304975e78cf\") " pod="kube-system/coredns-668d6bf9bc-rc7ct" May 15 00:02:33.081123 kubelet[2630]: I0515 00:02:33.080997 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22jrl\" (UniqueName: \"kubernetes.io/projected/0f58d263-bf5c-4f84-8ccf-9304975e78cf-kube-api-access-22jrl\") pod \"coredns-668d6bf9bc-rc7ct\" (UID: \"0f58d263-bf5c-4f84-8ccf-9304975e78cf\") " pod="kube-system/coredns-668d6bf9bc-rc7ct" May 15 00:02:33.081123 kubelet[2630]: I0515 00:02:33.081055 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbcv5\" (UniqueName: 
\"kubernetes.io/projected/bf7efd49-21f4-44e8-91fe-b1571871c3c4-kube-api-access-fbcv5\") pod \"coredns-668d6bf9bc-wr222\" (UID: \"bf7efd49-21f4-44e8-91fe-b1571871c3c4\") " pod="kube-system/coredns-668d6bf9bc-wr222" May 15 00:02:33.337232 kubelet[2630]: E0515 00:02:33.337197 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:33.358547 kubelet[2630]: E0515 00:02:33.358526 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:33.361220 containerd[1498]: time="2025-05-15T00:02:33.360239930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rc7ct,Uid:0f58d263-bf5c-4f84-8ccf-9304975e78cf,Namespace:kube-system,Attempt:0,}" May 15 00:02:33.366086 kubelet[2630]: E0515 00:02:33.364706 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:33.366447 containerd[1498]: time="2025-05-15T00:02:33.366397051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr222,Uid:bf7efd49-21f4-44e8-91fe-b1571871c3c4,Namespace:kube-system,Attempt:0,}" May 15 00:02:33.371088 kubelet[2630]: I0515 00:02:33.369380 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hcgwv" podStartSLOduration=6.211633225 podStartE2EDuration="17.369367021s" podCreationTimestamp="2025-05-15 00:02:16 +0000 UTC" firstStartedPulling="2025-05-15 00:02:17.553481101 +0000 UTC m=+5.583599917" lastFinishedPulling="2025-05-15 00:02:28.711214887 +0000 UTC m=+16.741333713" observedRunningTime="2025-05-15 00:02:33.365744801 +0000 UTC m=+21.395863617" watchObservedRunningTime="2025-05-15 
00:02:33.369367021 +0000 UTC m=+21.399485837" May 15 00:02:34.338284 kubelet[2630]: E0515 00:02:34.338240 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:35.340609 kubelet[2630]: E0515 00:02:35.340573 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:35.465374 systemd-networkd[1386]: cilium_host: Link UP May 15 00:02:35.468108 systemd-networkd[1386]: cilium_net: Link UP May 15 00:02:35.468364 systemd-networkd[1386]: cilium_net: Gained carrier May 15 00:02:35.468561 systemd-networkd[1386]: cilium_host: Gained carrier May 15 00:02:35.468775 systemd-networkd[1386]: cilium_net: Gained IPv6LL May 15 00:02:35.605201 systemd-networkd[1386]: cilium_vxlan: Link UP May 15 00:02:35.605215 systemd-networkd[1386]: cilium_vxlan: Gained carrier May 15 00:02:35.792228 systemd-networkd[1386]: cilium_host: Gained IPv6LL May 15 00:02:36.058070 kernel: NET: Registered PF_ALG protocol family May 15 00:02:36.343729 kubelet[2630]: E0515 00:02:36.343104 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:36.844698 systemd-networkd[1386]: lxc_health: Link UP May 15 00:02:36.866836 systemd-networkd[1386]: lxc_health: Gained carrier May 15 00:02:37.284302 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL May 15 00:02:37.352315 kubelet[2630]: E0515 00:02:37.352070 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:37.499504 systemd-networkd[1386]: lxcca73f6f9d287: Link UP May 15 
00:02:37.514092 kernel: eth0: renamed from tmp78685 May 15 00:02:37.520981 systemd-networkd[1386]: lxcca73f6f9d287: Gained carrier May 15 00:02:37.525302 systemd-networkd[1386]: lxc708d20430a03: Link UP May 15 00:02:37.532070 kernel: eth0: renamed from tmp65939 May 15 00:02:37.539673 systemd-networkd[1386]: lxc708d20430a03: Gained carrier May 15 00:02:38.352293 kubelet[2630]: E0515 00:02:38.350371 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:38.436257 systemd-networkd[1386]: lxc_health: Gained IPv6LL May 15 00:02:38.816395 systemd-networkd[1386]: lxc708d20430a03: Gained IPv6LL May 15 00:02:39.584545 systemd-networkd[1386]: lxcca73f6f9d287: Gained IPv6LL May 15 00:02:41.389541 containerd[1498]: time="2025-05-15T00:02:41.389248732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:41.390861 containerd[1498]: time="2025-05-15T00:02:41.390795872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:02:41.391818 containerd[1498]: time="2025-05-15T00:02:41.391781813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:41.391933 containerd[1498]: time="2025-05-15T00:02:41.391898793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:41.392452 containerd[1498]: time="2025-05-15T00:02:41.392177043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:02:41.392452 containerd[1498]: time="2025-05-15T00:02:41.392206043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:41.392452 containerd[1498]: time="2025-05-15T00:02:41.392330503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:41.392916 containerd[1498]: time="2025-05-15T00:02:41.392729943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:02:41.470195 systemd[1]: Started cri-containerd-65939255b16f1d661664bac64b1b8eafc120f61cad555c80106a6dfe442e4c8e.scope - libcontainer container 65939255b16f1d661664bac64b1b8eafc120f61cad555c80106a6dfe442e4c8e. May 15 00:02:41.478287 systemd[1]: Started cri-containerd-7868571ad97ca839eae98f88b9b2b26783970c25fef2c55294820cb9c174f012.scope - libcontainer container 7868571ad97ca839eae98f88b9b2b26783970c25fef2c55294820cb9c174f012. 
May 15 00:02:41.573978 containerd[1498]: time="2025-05-15T00:02:41.573804835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rc7ct,Uid:0f58d263-bf5c-4f84-8ccf-9304975e78cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"65939255b16f1d661664bac64b1b8eafc120f61cad555c80106a6dfe442e4c8e\"" May 15 00:02:41.575322 kubelet[2630]: E0515 00:02:41.575285 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:41.583569 containerd[1498]: time="2025-05-15T00:02:41.582100545Z" level=info msg="CreateContainer within sandbox \"65939255b16f1d661664bac64b1b8eafc120f61cad555c80106a6dfe442e4c8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:02:41.620942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511834179.mount: Deactivated successfully. May 15 00:02:41.628754 containerd[1498]: time="2025-05-15T00:02:41.628687088Z" level=info msg="CreateContainer within sandbox \"65939255b16f1d661664bac64b1b8eafc120f61cad555c80106a6dfe442e4c8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93205513f10a507b0ab07eee599af0be3443404b9e4eb4ce2e4b0152b3f53e7a\"" May 15 00:02:41.630388 containerd[1498]: time="2025-05-15T00:02:41.630361508Z" level=info msg="StartContainer for \"93205513f10a507b0ab07eee599af0be3443404b9e4eb4ce2e4b0152b3f53e7a\"" May 15 00:02:41.696606 systemd[1]: Started cri-containerd-93205513f10a507b0ab07eee599af0be3443404b9e4eb4ce2e4b0152b3f53e7a.scope - libcontainer container 93205513f10a507b0ab07eee599af0be3443404b9e4eb4ce2e4b0152b3f53e7a. 
May 15 00:02:41.722904 containerd[1498]: time="2025-05-15T00:02:41.722745224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr222,Uid:bf7efd49-21f4-44e8-91fe-b1571871c3c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7868571ad97ca839eae98f88b9b2b26783970c25fef2c55294820cb9c174f012\"" May 15 00:02:41.725650 kubelet[2630]: E0515 00:02:41.725434 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:41.733514 containerd[1498]: time="2025-05-15T00:02:41.733480375Z" level=info msg="CreateContainer within sandbox \"7868571ad97ca839eae98f88b9b2b26783970c25fef2c55294820cb9c174f012\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:02:41.758861 containerd[1498]: time="2025-05-15T00:02:41.758797457Z" level=info msg="CreateContainer within sandbox \"7868571ad97ca839eae98f88b9b2b26783970c25fef2c55294820cb9c174f012\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6dc636a14baff5991495b1c2b89e21b48423c2621e204d6b7320c9f86787a5a7\"" May 15 00:02:41.760539 containerd[1498]: time="2025-05-15T00:02:41.760466477Z" level=info msg="StartContainer for \"6dc636a14baff5991495b1c2b89e21b48423c2621e204d6b7320c9f86787a5a7\"" May 15 00:02:41.767081 containerd[1498]: time="2025-05-15T00:02:41.767044987Z" level=info msg="StartContainer for \"93205513f10a507b0ab07eee599af0be3443404b9e4eb4ce2e4b0152b3f53e7a\" returns successfully" May 15 00:02:41.804218 systemd[1]: Started cri-containerd-6dc636a14baff5991495b1c2b89e21b48423c2621e204d6b7320c9f86787a5a7.scope - libcontainer container 6dc636a14baff5991495b1c2b89e21b48423c2621e204d6b7320c9f86787a5a7. 
May 15 00:02:41.878047 containerd[1498]: time="2025-05-15T00:02:41.877961225Z" level=info msg="StartContainer for \"6dc636a14baff5991495b1c2b89e21b48423c2621e204d6b7320c9f86787a5a7\" returns successfully" May 15 00:02:42.374729 kubelet[2630]: E0515 00:02:42.374682 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:42.378212 kubelet[2630]: E0515 00:02:42.378169 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:42.396469 kubelet[2630]: I0515 00:02:42.396316 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wr222" podStartSLOduration=26.396236097 podStartE2EDuration="26.396236097s" podCreationTimestamp="2025-05-15 00:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:02:42.393798737 +0000 UTC m=+30.423917553" watchObservedRunningTime="2025-05-15 00:02:42.396236097 +0000 UTC m=+30.426354913" May 15 00:02:42.407741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3994655570.mount: Deactivated successfully. 
May 15 00:02:42.424447 kubelet[2630]: I0515 00:02:42.424390 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rc7ct" podStartSLOduration=26.424370809 podStartE2EDuration="26.424370809s" podCreationTimestamp="2025-05-15 00:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:02:42.424172059 +0000 UTC m=+30.454290875" watchObservedRunningTime="2025-05-15 00:02:42.424370809 +0000 UTC m=+30.454489625" May 15 00:02:43.380197 kubelet[2630]: E0515 00:02:43.380104 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:43.380197 kubelet[2630]: E0515 00:02:43.380104 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:44.381283 kubelet[2630]: E0515 00:02:44.381191 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:02:44.381283 kubelet[2630]: E0515 00:02:44.381215 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:03:37.135835 kubelet[2630]: E0515 00:03:37.135589 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:03:39.136080 kubelet[2630]: E0515 00:03:39.135709 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:03:43.135298 kubelet[2630]: E0515 00:03:43.135238 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:03:44.135241 kubelet[2630]: E0515 00:03:44.134812 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:03:48.135441 kubelet[2630]: E0515 00:03:48.134733 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:03:50.135698 kubelet[2630]: E0515 00:03:50.134687 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:04:00.136562 kubelet[2630]: E0515 00:04:00.135841 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:04:04.136906 kubelet[2630]: E0515 00:04:04.135571 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 15 00:04:08.936671 systemd[1]: Started sshd@7-172.237.148.154:22-139.178.68.195:57286.service - OpenSSH per-connection server daemon (139.178.68.195:57286). 
May 15 00:04:09.277309 sshd[4026]: Accepted publickey for core from 139.178.68.195 port 57286 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78 May 15 00:04:09.279357 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:04:09.294157 systemd-logind[1475]: New session 8 of user core. May 15 00:04:09.308223 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:04:09.713415 sshd[4028]: Connection closed by 139.178.68.195 port 57286 May 15 00:04:09.714380 sshd-session[4026]: pam_unix(sshd:session): session closed for user core May 15 00:04:09.723988 systemd[1]: sshd@7-172.237.148.154:22-139.178.68.195:57286.service: Deactivated successfully. May 15 00:04:09.731470 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:04:09.732491 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. May 15 00:04:09.734060 systemd-logind[1475]: Removed session 8. May 15 00:04:14.782312 systemd[1]: Started sshd@8-172.237.148.154:22-139.178.68.195:54196.service - OpenSSH per-connection server daemon (139.178.68.195:54196). May 15 00:04:15.133762 sshd[4043]: Accepted publickey for core from 139.178.68.195 port 54196 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78 May 15 00:04:15.136113 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:04:15.141818 systemd-logind[1475]: New session 9 of user core. May 15 00:04:15.154191 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:04:15.479362 sshd[4045]: Connection closed by 139.178.68.195 port 54196 May 15 00:04:15.480383 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 15 00:04:15.485787 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. May 15 00:04:15.486667 systemd[1]: sshd@8-172.237.148.154:22-139.178.68.195:54196.service: Deactivated successfully. 
May 15 00:04:15.490555 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:04:15.492521 systemd-logind[1475]: Removed session 9. May 15 00:04:20.545265 systemd[1]: Started sshd@9-172.237.148.154:22-139.178.68.195:54198.service - OpenSSH per-connection server daemon (139.178.68.195:54198). May 15 00:04:20.867862 sshd[4060]: Accepted publickey for core from 139.178.68.195 port 54198 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78 May 15 00:04:20.869464 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:04:20.874463 systemd-logind[1475]: New session 10 of user core. May 15 00:04:20.881185 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:04:21.195182 sshd[4062]: Connection closed by 139.178.68.195 port 54198 May 15 00:04:21.194983 sshd-session[4060]: pam_unix(sshd:session): session closed for user core May 15 00:04:21.200347 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. May 15 00:04:21.201210 systemd[1]: sshd@9-172.237.148.154:22-139.178.68.195:54198.service: Deactivated successfully. May 15 00:04:21.203804 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:04:21.205080 systemd-logind[1475]: Removed session 10. May 15 00:04:26.265272 systemd[1]: Started sshd@10-172.237.148.154:22-139.178.68.195:52668.service - OpenSSH per-connection server daemon (139.178.68.195:52668). May 15 00:04:26.632241 sshd[4075]: Accepted publickey for core from 139.178.68.195 port 52668 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78 May 15 00:04:26.632945 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:04:26.639316 systemd-logind[1475]: New session 11 of user core. May 15 00:04:26.647188 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 15 00:04:27.033595 sshd[4077]: Connection closed by 139.178.68.195 port 52668
May 15 00:04:27.035390 sshd-session[4075]: pam_unix(sshd:session): session closed for user core
May 15 00:04:27.041572 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit.
May 15 00:04:27.042888 systemd[1]: sshd@10-172.237.148.154:22-139.178.68.195:52668.service: Deactivated successfully.
May 15 00:04:27.046475 systemd[1]: session-11.scope: Deactivated successfully.
May 15 00:04:27.048170 systemd-logind[1475]: Removed session 11.
May 15 00:04:27.103253 systemd[1]: Started sshd@11-172.237.148.154:22-139.178.68.195:52680.service - OpenSSH per-connection server daemon (139.178.68.195:52680).
May 15 00:04:27.452850 sshd[4089]: Accepted publickey for core from 139.178.68.195 port 52680 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:27.455490 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:27.461261 systemd-logind[1475]: New session 12 of user core.
May 15 00:04:27.468171 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 00:04:27.846128 sshd[4091]: Connection closed by 139.178.68.195 port 52680
May 15 00:04:27.847246 sshd-session[4089]: pam_unix(sshd:session): session closed for user core
May 15 00:04:27.852747 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit.
May 15 00:04:27.853890 systemd[1]: sshd@11-172.237.148.154:22-139.178.68.195:52680.service: Deactivated successfully.
May 15 00:04:27.857122 systemd[1]: session-12.scope: Deactivated successfully.
May 15 00:04:27.858793 systemd-logind[1475]: Removed session 12.
May 15 00:04:27.906265 systemd[1]: Started sshd@12-172.237.148.154:22-139.178.68.195:52686.service - OpenSSH per-connection server daemon (139.178.68.195:52686).
May 15 00:04:28.235532 sshd[4101]: Accepted publickey for core from 139.178.68.195 port 52686 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:28.237710 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:28.243387 systemd-logind[1475]: New session 13 of user core.
May 15 00:04:28.251183 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 00:04:28.547356 sshd[4103]: Connection closed by 139.178.68.195 port 52686
May 15 00:04:28.547873 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
May 15 00:04:28.552462 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit.
May 15 00:04:28.553615 systemd[1]: sshd@12-172.237.148.154:22-139.178.68.195:52686.service: Deactivated successfully.
May 15 00:04:28.556376 systemd[1]: session-13.scope: Deactivated successfully.
May 15 00:04:28.557613 systemd-logind[1475]: Removed session 13.
May 15 00:04:33.619261 systemd[1]: Started sshd@13-172.237.148.154:22-139.178.68.195:53916.service - OpenSSH per-connection server daemon (139.178.68.195:53916).
May 15 00:04:33.955375 sshd[4115]: Accepted publickey for core from 139.178.68.195 port 53916 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:33.956048 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:33.964192 systemd-logind[1475]: New session 14 of user core.
May 15 00:04:33.974178 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 00:04:34.274687 sshd[4117]: Connection closed by 139.178.68.195 port 53916
May 15 00:04:34.276260 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
May 15 00:04:34.280298 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit.
May 15 00:04:34.280857 systemd[1]: sshd@13-172.237.148.154:22-139.178.68.195:53916.service: Deactivated successfully.
May 15 00:04:34.283020 systemd[1]: session-14.scope: Deactivated successfully.
May 15 00:04:34.284252 systemd-logind[1475]: Removed session 14.
May 15 00:04:39.360366 systemd[1]: Started sshd@14-172.237.148.154:22-139.178.68.195:53920.service - OpenSSH per-connection server daemon (139.178.68.195:53920).
May 15 00:04:39.822350 sshd[4131]: Accepted publickey for core from 139.178.68.195 port 53920 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:39.823290 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:39.834011 systemd-logind[1475]: New session 15 of user core.
May 15 00:04:39.842296 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 00:04:40.230526 sshd[4133]: Connection closed by 139.178.68.195 port 53920
May 15 00:04:40.232365 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
May 15 00:04:40.237775 systemd[1]: sshd@14-172.237.148.154:22-139.178.68.195:53920.service: Deactivated successfully.
May 15 00:04:40.240933 systemd[1]: session-15.scope: Deactivated successfully.
May 15 00:04:40.242888 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit.
May 15 00:04:40.244281 systemd-logind[1475]: Removed session 15.
May 15 00:04:40.296437 systemd[1]: Started sshd@15-172.237.148.154:22-139.178.68.195:53930.service - OpenSSH per-connection server daemon (139.178.68.195:53930).
May 15 00:04:40.675855 sshd[4145]: Accepted publickey for core from 139.178.68.195 port 53930 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:40.677824 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:40.684454 systemd-logind[1475]: New session 16 of user core.
May 15 00:04:40.691207 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 00:04:41.044563 sshd[4147]: Connection closed by 139.178.68.195 port 53930
May 15 00:04:41.046272 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
May 15 00:04:41.051073 systemd[1]: sshd@15-172.237.148.154:22-139.178.68.195:53930.service: Deactivated successfully.
May 15 00:04:41.053635 systemd[1]: session-16.scope: Deactivated successfully.
May 15 00:04:41.054702 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit.
May 15 00:04:41.055744 systemd-logind[1475]: Removed session 16.
May 15 00:04:41.113468 systemd[1]: Started sshd@16-172.237.148.154:22-139.178.68.195:53940.service - OpenSSH per-connection server daemon (139.178.68.195:53940).
May 15 00:04:41.453959 sshd[4157]: Accepted publickey for core from 139.178.68.195 port 53940 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:41.454615 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:41.460317 systemd-logind[1475]: New session 17 of user core.
May 15 00:04:41.465247 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 00:04:42.543064 sshd[4160]: Connection closed by 139.178.68.195 port 53940
May 15 00:04:42.544265 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
May 15 00:04:42.549425 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit.
May 15 00:04:42.549883 systemd[1]: sshd@16-172.237.148.154:22-139.178.68.195:53940.service: Deactivated successfully.
May 15 00:04:42.553384 systemd[1]: session-17.scope: Deactivated successfully.
May 15 00:04:42.556141 systemd-logind[1475]: Removed session 17.
May 15 00:04:42.613618 systemd[1]: Started sshd@17-172.237.148.154:22-139.178.68.195:53948.service - OpenSSH per-connection server daemon (139.178.68.195:53948).
May 15 00:04:42.970330 sshd[4177]: Accepted publickey for core from 139.178.68.195 port 53948 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:42.972793 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:42.980011 systemd-logind[1475]: New session 18 of user core.
May 15 00:04:42.986238 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 00:04:43.414246 sshd[4179]: Connection closed by 139.178.68.195 port 53948
May 15 00:04:43.417482 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
May 15 00:04:43.421018 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit.
May 15 00:04:43.423756 systemd[1]: sshd@17-172.237.148.154:22-139.178.68.195:53948.service: Deactivated successfully.
May 15 00:04:43.428217 systemd[1]: session-18.scope: Deactivated successfully.
May 15 00:04:43.431576 systemd-logind[1475]: Removed session 18.
May 15 00:04:43.488491 systemd[1]: Started sshd@18-172.237.148.154:22-139.178.68.195:53958.service - OpenSSH per-connection server daemon (139.178.68.195:53958).
May 15 00:04:43.829928 sshd[4189]: Accepted publickey for core from 139.178.68.195 port 53958 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:43.831552 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:43.836533 systemd-logind[1475]: New session 19 of user core.
May 15 00:04:43.841160 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 00:04:44.136145 kubelet[2630]: E0515 00:04:44.135901 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:04:44.137503 kubelet[2630]: E0515 00:04:44.137322 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:04:44.141247 sshd[4191]: Connection closed by 139.178.68.195 port 53958
May 15 00:04:44.142128 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
May 15 00:04:44.149765 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit.
May 15 00:04:44.150620 systemd[1]: sshd@18-172.237.148.154:22-139.178.68.195:53958.service: Deactivated successfully.
May 15 00:04:44.153222 systemd[1]: session-19.scope: Deactivated successfully.
May 15 00:04:44.154260 systemd-logind[1475]: Removed session 19.
May 15 00:04:48.137551 kubelet[2630]: E0515 00:04:48.137512 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:04:49.229384 systemd[1]: Started sshd@19-172.237.148.154:22-139.178.68.195:60454.service - OpenSSH per-connection server daemon (139.178.68.195:60454).
May 15 00:04:49.592960 sshd[4205]: Accepted publickey for core from 139.178.68.195 port 60454 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:49.594361 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:49.599171 systemd-logind[1475]: New session 20 of user core.
May 15 00:04:49.610203 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 00:04:49.917958 sshd[4207]: Connection closed by 139.178.68.195 port 60454
May 15 00:04:49.919565 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
May 15 00:04:49.923927 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit.
May 15 00:04:49.924792 systemd[1]: sshd@19-172.237.148.154:22-139.178.68.195:60454.service: Deactivated successfully.
May 15 00:04:49.927374 systemd[1]: session-20.scope: Deactivated successfully.
May 15 00:04:49.928519 systemd-logind[1475]: Removed session 20.
May 15 00:04:50.135705 kubelet[2630]: E0515 00:04:50.135016 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:04:54.995249 systemd[1]: Started sshd@20-172.237.148.154:22-139.178.68.195:55656.service - OpenSSH per-connection server daemon (139.178.68.195:55656).
May 15 00:04:55.135202 kubelet[2630]: E0515 00:04:55.135083 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:04:55.401755 sshd[4221]: Accepted publickey for core from 139.178.68.195 port 55656 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:04:55.403619 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:04:55.409974 systemd-logind[1475]: New session 21 of user core.
May 15 00:04:55.414170 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 00:04:55.739112 sshd[4223]: Connection closed by 139.178.68.195 port 55656
May 15 00:04:55.739881 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
May 15 00:04:55.745181 systemd[1]: sshd@20-172.237.148.154:22-139.178.68.195:55656.service: Deactivated successfully.
May 15 00:04:55.748098 systemd[1]: session-21.scope: Deactivated successfully.
May 15 00:04:55.749253 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit.
May 15 00:04:55.751283 systemd-logind[1475]: Removed session 21.
May 15 00:04:57.135135 kubelet[2630]: E0515 00:04:57.135091 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:00.815294 systemd[1]: Started sshd@21-172.237.148.154:22-139.178.68.195:55670.service - OpenSSH per-connection server daemon (139.178.68.195:55670).
May 15 00:05:01.147511 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 55670 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:05:01.149363 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:05:01.154090 systemd-logind[1475]: New session 22 of user core.
May 15 00:05:01.162179 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 00:05:01.459909 sshd[4237]: Connection closed by 139.178.68.195 port 55670
May 15 00:05:01.460603 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
May 15 00:05:01.465169 systemd[1]: sshd@21-172.237.148.154:22-139.178.68.195:55670.service: Deactivated successfully.
May 15 00:05:01.467792 systemd[1]: session-22.scope: Deactivated successfully.
May 15 00:05:01.468689 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit.
May 15 00:05:01.469689 systemd-logind[1475]: Removed session 22.
May 15 00:05:06.535510 systemd[1]: Started sshd@22-172.237.148.154:22-139.178.68.195:51326.service - OpenSSH per-connection server daemon (139.178.68.195:51326).
May 15 00:05:06.887253 sshd[4249]: Accepted publickey for core from 139.178.68.195 port 51326 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:05:06.889023 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:05:06.896206 systemd-logind[1475]: New session 23 of user core.
May 15 00:05:06.907202 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 00:05:07.203901 sshd[4251]: Connection closed by 139.178.68.195 port 51326
May 15 00:05:07.204785 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
May 15 00:05:07.208997 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit.
May 15 00:05:07.210267 systemd[1]: sshd@22-172.237.148.154:22-139.178.68.195:51326.service: Deactivated successfully.
May 15 00:05:07.212912 systemd[1]: session-23.scope: Deactivated successfully.
May 15 00:05:07.214001 systemd-logind[1475]: Removed session 23.
May 15 00:05:07.274266 systemd[1]: Started sshd@23-172.237.148.154:22-139.178.68.195:51340.service - OpenSSH per-connection server daemon (139.178.68.195:51340).
May 15 00:05:07.698364 sshd[4263]: Accepted publickey for core from 139.178.68.195 port 51340 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:05:07.699199 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:05:07.703842 systemd-logind[1475]: New session 24 of user core.
May 15 00:05:07.713167 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 00:05:09.222468 containerd[1498]: time="2025-05-15T00:05:09.221690094Z" level=info msg="StopContainer for \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\" with timeout 30 (s)"
May 15 00:05:09.225443 containerd[1498]: time="2025-05-15T00:05:09.224573583Z" level=info msg="Stop container \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\" with signal terminated"
May 15 00:05:09.261646 systemd[1]: cri-containerd-d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f.scope: Deactivated successfully.
May 15 00:05:09.287404 containerd[1498]: time="2025-05-15T00:05:09.287304543Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:05:09.300808 containerd[1498]: time="2025-05-15T00:05:09.300462091Z" level=info msg="StopContainer for \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\" with timeout 2 (s)"
May 15 00:05:09.301819 containerd[1498]: time="2025-05-15T00:05:09.301792841Z" level=info msg="Stop container \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\" with signal terminated"
May 15 00:05:09.320736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f-rootfs.mount: Deactivated successfully.
May 15 00:05:09.328258 systemd-networkd[1386]: lxc_health: Link DOWN
May 15 00:05:09.328267 systemd-networkd[1386]: lxc_health: Lost carrier
May 15 00:05:09.338056 containerd[1498]: time="2025-05-15T00:05:09.337342495Z" level=info msg="shim disconnected" id=d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f namespace=k8s.io
May 15 00:05:09.338056 containerd[1498]: time="2025-05-15T00:05:09.337539715Z" level=warning msg="cleaning up after shim disconnected" id=d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f namespace=k8s.io
May 15 00:05:09.338056 containerd[1498]: time="2025-05-15T00:05:09.337566385Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:09.359583 systemd[1]: cri-containerd-e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a.scope: Deactivated successfully.
May 15 00:05:09.361149 systemd[1]: cri-containerd-e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a.scope: Consumed 8.819s CPU time, 124.1M memory peak, 136K read from disk, 13.3M written to disk.
May 15 00:05:09.380597 containerd[1498]: time="2025-05-15T00:05:09.380557068Z" level=info msg="StopContainer for \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\" returns successfully"
May 15 00:05:09.381892 containerd[1498]: time="2025-05-15T00:05:09.381771607Z" level=info msg="StopPodSandbox for \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\""
May 15 00:05:09.382782 containerd[1498]: time="2025-05-15T00:05:09.382108707Z" level=info msg="Container to stop \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:05:09.385776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af-shm.mount: Deactivated successfully.
May 15 00:05:09.394776 systemd[1]: cri-containerd-05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af.scope: Deactivated successfully.
May 15 00:05:09.409845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a-rootfs.mount: Deactivated successfully.
May 15 00:05:09.418254 containerd[1498]: time="2025-05-15T00:05:09.418160811Z" level=info msg="shim disconnected" id=e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a namespace=k8s.io
May 15 00:05:09.418416 containerd[1498]: time="2025-05-15T00:05:09.418252831Z" level=warning msg="cleaning up after shim disconnected" id=e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a namespace=k8s.io
May 15 00:05:09.418416 containerd[1498]: time="2025-05-15T00:05:09.418261801Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:09.426514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af-rootfs.mount: Deactivated successfully.
May 15 00:05:09.429591 containerd[1498]: time="2025-05-15T00:05:09.429398909Z" level=info msg="shim disconnected" id=05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af namespace=k8s.io
May 15 00:05:09.429591 containerd[1498]: time="2025-05-15T00:05:09.429446359Z" level=warning msg="cleaning up after shim disconnected" id=05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af namespace=k8s.io
May 15 00:05:09.429591 containerd[1498]: time="2025-05-15T00:05:09.429469629Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:09.446212 containerd[1498]: time="2025-05-15T00:05:09.446116497Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:05:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 00:05:09.456052 containerd[1498]: time="2025-05-15T00:05:09.455978895Z" level=info msg="StopContainer for \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\" returns successfully"
May 15 00:05:09.458624 containerd[1498]: time="2025-05-15T00:05:09.458599405Z" level=info msg="StopPodSandbox for \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\""
May 15 00:05:09.458746 containerd[1498]: time="2025-05-15T00:05:09.458687775Z" level=info msg="Container to stop \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:05:09.458780 containerd[1498]: time="2025-05-15T00:05:09.458743545Z" level=info msg="Container to stop \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:05:09.458780 containerd[1498]: time="2025-05-15T00:05:09.458759225Z" level=info msg="Container to stop \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:05:09.458780 containerd[1498]: time="2025-05-15T00:05:09.458771535Z" level=info msg="Container to stop \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:05:09.458780 containerd[1498]: time="2025-05-15T00:05:09.458779815Z" level=info msg="Container to stop \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:05:09.462139 containerd[1498]: time="2025-05-15T00:05:09.462012994Z" level=info msg="TearDown network for sandbox \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" successfully"
May 15 00:05:09.462139 containerd[1498]: time="2025-05-15T00:05:09.462071564Z" level=info msg="StopPodSandbox for \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" returns successfully"
May 15 00:05:09.476785 systemd[1]: cri-containerd-f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985.scope: Deactivated successfully.
May 15 00:05:09.515766 containerd[1498]: time="2025-05-15T00:05:09.515675775Z" level=info msg="shim disconnected" id=f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985 namespace=k8s.io
May 15 00:05:09.515766 containerd[1498]: time="2025-05-15T00:05:09.515749105Z" level=warning msg="cleaning up after shim disconnected" id=f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985 namespace=k8s.io
May 15 00:05:09.515766 containerd[1498]: time="2025-05-15T00:05:09.515758755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:09.557373 containerd[1498]: time="2025-05-15T00:05:09.557277708Z" level=info msg="TearDown network for sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" successfully"
May 15 00:05:09.557373 containerd[1498]: time="2025-05-15T00:05:09.557327708Z" level=info msg="StopPodSandbox for \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" returns successfully"
May 15 00:05:09.576838 kubelet[2630]: I0515 00:05:09.574818 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-xtables-lock\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.576838 kubelet[2630]: I0515 00:05:09.574872 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-bpf-maps\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.576838 kubelet[2630]: I0515 00:05:09.574894 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cni-path\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.576838 kubelet[2630]: I0515 00:05:09.574923 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-lib-modules\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.576838 kubelet[2630]: I0515 00:05:09.574987 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-hubble-tls\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.576838 kubelet[2630]: I0515 00:05:09.575014 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-etc-cni-netd\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577528 kubelet[2630]: I0515 00:05:09.575109 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-cilium-config-path\") pod \"8b8d7a44-30c6-4a33-ac93-eaadcc355d9a\" (UID: \"8b8d7a44-30c6-4a33-ac93-eaadcc355d9a\") "
May 15 00:05:09.577528 kubelet[2630]: I0515 00:05:09.575131 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-config-path\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577528 kubelet[2630]: I0515 00:05:09.575146 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-run\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577528 kubelet[2630]: I0515 00:05:09.575166 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z989\" (UniqueName: \"kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-kube-api-access-6z989\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577528 kubelet[2630]: I0515 00:05:09.575184 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-net\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577528 kubelet[2630]: I0515 00:05:09.575201 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-kernel\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577687 kubelet[2630]: I0515 00:05:09.575223 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-cgroup\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577687 kubelet[2630]: I0515 00:05:09.575238 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-hostproc\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.577687 kubelet[2630]: I0515 00:05:09.575283 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n6pk\" (UniqueName: \"kubernetes.io/projected/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-kube-api-access-8n6pk\") pod \"8b8d7a44-30c6-4a33-ac93-eaadcc355d9a\" (UID: \"8b8d7a44-30c6-4a33-ac93-eaadcc355d9a\") "
May 15 00:05:09.577687 kubelet[2630]: I0515 00:05:09.575322 2630 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/114ade3f-b18a-4f94-975c-78f9bbcdd956-clustermesh-secrets\") pod \"114ade3f-b18a-4f94-975c-78f9bbcdd956\" (UID: \"114ade3f-b18a-4f94-975c-78f9bbcdd956\") "
May 15 00:05:09.578348 kubelet[2630]: I0515 00:05:09.578241 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.578424 kubelet[2630]: I0515 00:05:09.578404 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.578464 kubelet[2630]: I0515 00:05:09.578422 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cni-path" (OuterVolumeSpecName: "cni-path") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.578464 kubelet[2630]: I0515 00:05:09.578437 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.579523 kubelet[2630]: I0515 00:05:09.579247 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.585817 kubelet[2630]: I0515 00:05:09.584555 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.585817 kubelet[2630]: I0515 00:05:09.584629 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.585817 kubelet[2630]: I0515 00:05:09.584653 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.585817 kubelet[2630]: I0515 00:05:09.584672 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-hostproc" (OuterVolumeSpecName: "hostproc") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.586885 kubelet[2630]: I0515 00:05:09.586857 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:05:09.592675 kubelet[2630]: I0515 00:05:09.592646 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/114ade3f-b18a-4f94-975c-78f9bbcdd956-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 00:05:09.597420 kubelet[2630]: I0515 00:05:09.592884 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-kube-api-access-8n6pk" (OuterVolumeSpecName: "kube-api-access-8n6pk") pod "8b8d7a44-30c6-4a33-ac93-eaadcc355d9a" (UID: "8b8d7a44-30c6-4a33-ac93-eaadcc355d9a"). InnerVolumeSpecName "kube-api-access-8n6pk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:05:09.597500 kubelet[2630]: I0515 00:05:09.594570 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-kube-api-access-6z989" (OuterVolumeSpecName: "kube-api-access-6z989") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "kube-api-access-6z989". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:05:09.597547 kubelet[2630]: I0515 00:05:09.594953 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 00:05:09.597600 kubelet[2630]: I0515 00:05:09.595108 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "114ade3f-b18a-4f94-975c-78f9bbcdd956" (UID: "114ade3f-b18a-4f94-975c-78f9bbcdd956"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 00:05:09.597684 kubelet[2630]: I0515 00:05:09.596013 2630 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b8d7a44-30c6-4a33-ac93-eaadcc355d9a" (UID: "8b8d7a44-30c6-4a33-ac93-eaadcc355d9a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 00:05:09.676246 kubelet[2630]: I0515 00:05:09.676175 2630 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/114ade3f-b18a-4f94-975c-78f9bbcdd956-clustermesh-secrets\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676246 kubelet[2630]: I0515 00:05:09.676230 2630 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-cgroup\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676246 kubelet[2630]: I0515 00:05:09.676246 2630 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-hostproc\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676246 kubelet[2630]: I0515 00:05:09.676256 2630 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8n6pk\" (UniqueName: \"kubernetes.io/projected/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-kube-api-access-8n6pk\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676246 kubelet[2630]: I0515 00:05:09.676266 2630 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-xtables-lock\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676277 2630 reconciler_common.go:299] "Volume 
detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-bpf-maps\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676284 2630 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cni-path\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676292 2630 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-lib-modules\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676300 2630 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-hubble-tls\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676308 2630 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-config-path\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676324 2630 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-etc-cni-netd\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676332 2630 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a-cilium-config-path\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676575 kubelet[2630]: I0515 00:05:09.676340 2630 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-net\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676773 kubelet[2630]: I0515 00:05:09.676348 2630 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-host-proc-sys-kernel\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676773 kubelet[2630]: I0515 00:05:09.676356 2630 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/114ade3f-b18a-4f94-975c-78f9bbcdd956-cilium-run\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.676773 kubelet[2630]: I0515 00:05:09.676364 2630 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6z989\" (UniqueName: \"kubernetes.io/projected/114ade3f-b18a-4f94-975c-78f9bbcdd956-kube-api-access-6z989\") on node \"172-237-148-154\" DevicePath \"\"" May 15 00:05:09.739614 kubelet[2630]: I0515 00:05:09.739443 2630 scope.go:117] "RemoveContainer" containerID="d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f" May 15 00:05:09.744303 containerd[1498]: time="2025-05-15T00:05:09.743604207Z" level=info msg="RemoveContainer for \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\"" May 15 00:05:09.751868 containerd[1498]: time="2025-05-15T00:05:09.751840846Z" level=info msg="RemoveContainer for \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\" returns successfully" May 15 00:05:09.753312 kubelet[2630]: I0515 00:05:09.753284 2630 scope.go:117] "RemoveContainer" containerID="d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f" May 15 00:05:09.753644 containerd[1498]: time="2025-05-15T00:05:09.753535626Z" level=error msg="ContainerStatus for \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\": not found" May 15 00:05:09.755411 systemd[1]: Removed slice kubepods-besteffort-pod8b8d7a44_30c6_4a33_ac93_eaadcc355d9a.slice - libcontainer container kubepods-besteffort-pod8b8d7a44_30c6_4a33_ac93_eaadcc355d9a.slice. May 15 00:05:09.756410 kubelet[2630]: E0515 00:05:09.756384 2630 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\": not found" containerID="d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f" May 15 00:05:09.756639 kubelet[2630]: I0515 00:05:09.756446 2630 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f"} err="failed to get container status \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3ff9001f278097d828da9490ba29e923be66c7e017719997ef9c09119594b1f\": not found" May 15 00:05:09.756639 kubelet[2630]: I0515 00:05:09.756631 2630 scope.go:117] "RemoveContainer" containerID="e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a" May 15 00:05:09.760844 containerd[1498]: time="2025-05-15T00:05:09.760369324Z" level=info msg="RemoveContainer for \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\"" May 15 00:05:09.761423 systemd[1]: Removed slice kubepods-burstable-pod114ade3f_b18a_4f94_975c_78f9bbcdd956.slice - libcontainer container kubepods-burstable-pod114ade3f_b18a_4f94_975c_78f9bbcdd956.slice. May 15 00:05:09.761532 systemd[1]: kubepods-burstable-pod114ade3f_b18a_4f94_975c_78f9bbcdd956.slice: Consumed 9.060s CPU time, 124.5M memory peak, 136K read from disk, 13.3M written to disk. 
May 15 00:05:09.765506 containerd[1498]: time="2025-05-15T00:05:09.765484394Z" level=info msg="RemoveContainer for \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\" returns successfully" May 15 00:05:09.765718 kubelet[2630]: I0515 00:05:09.765702 2630 scope.go:117] "RemoveContainer" containerID="e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb" May 15 00:05:09.766883 containerd[1498]: time="2025-05-15T00:05:09.766741563Z" level=info msg="RemoveContainer for \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\"" May 15 00:05:09.770083 containerd[1498]: time="2025-05-15T00:05:09.770021393Z" level=info msg="RemoveContainer for \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\" returns successfully" May 15 00:05:09.770532 kubelet[2630]: I0515 00:05:09.770301 2630 scope.go:117] "RemoveContainer" containerID="31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb" May 15 00:05:09.771773 containerd[1498]: time="2025-05-15T00:05:09.771699682Z" level=info msg="RemoveContainer for \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\"" May 15 00:05:09.775957 containerd[1498]: time="2025-05-15T00:05:09.775927092Z" level=info msg="RemoveContainer for \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\" returns successfully" May 15 00:05:09.777077 kubelet[2630]: I0515 00:05:09.776984 2630 scope.go:117] "RemoveContainer" containerID="b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148" May 15 00:05:09.778220 containerd[1498]: time="2025-05-15T00:05:09.778165961Z" level=info msg="RemoveContainer for \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\"" May 15 00:05:09.780741 containerd[1498]: time="2025-05-15T00:05:09.780700301Z" level=info msg="RemoveContainer for \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\" returns successfully" May 15 00:05:09.780842 kubelet[2630]: I0515 00:05:09.780825 2630 scope.go:117] 
"RemoveContainer" containerID="3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba" May 15 00:05:09.782690 containerd[1498]: time="2025-05-15T00:05:09.782596151Z" level=info msg="RemoveContainer for \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\"" May 15 00:05:09.787471 containerd[1498]: time="2025-05-15T00:05:09.787442900Z" level=info msg="RemoveContainer for \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\" returns successfully" May 15 00:05:09.787728 kubelet[2630]: I0515 00:05:09.787697 2630 scope.go:117] "RemoveContainer" containerID="e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a" May 15 00:05:09.788758 containerd[1498]: time="2025-05-15T00:05:09.788278350Z" level=error msg="ContainerStatus for \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\": not found" May 15 00:05:09.788838 kubelet[2630]: E0515 00:05:09.788406 2630 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\": not found" containerID="e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a" May 15 00:05:09.788838 kubelet[2630]: I0515 00:05:09.788434 2630 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a"} err="failed to get container status \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e55518c85a84c153ef75fb20e29fd6404bde5c5bc5a77b69c6d868e7282fc08a\": not found" May 15 00:05:09.788838 kubelet[2630]: I0515 00:05:09.788452 2630 scope.go:117] "RemoveContainer" 
containerID="e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb" May 15 00:05:09.789211 containerd[1498]: time="2025-05-15T00:05:09.789178020Z" level=error msg="ContainerStatus for \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\": not found" May 15 00:05:09.789686 kubelet[2630]: E0515 00:05:09.789606 2630 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\": not found" containerID="e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb" May 15 00:05:09.789686 kubelet[2630]: I0515 00:05:09.789635 2630 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb"} err="failed to get container status \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0ff28056bb2dd2b4711dd8258116ca1ce818aada0bcce4e7cc7acb660ec3dcb\": not found" May 15 00:05:09.789686 kubelet[2630]: I0515 00:05:09.789648 2630 scope.go:117] "RemoveContainer" containerID="31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb" May 15 00:05:09.790007 kubelet[2630]: E0515 00:05:09.789982 2630 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\": not found" containerID="31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb" May 15 00:05:09.790093 containerd[1498]: time="2025-05-15T00:05:09.789794420Z" level=error msg="ContainerStatus for 
\"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\": not found" May 15 00:05:09.790119 kubelet[2630]: I0515 00:05:09.790000 2630 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb"} err="failed to get container status \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"31909e932c637cb42f9c0eeea313c83a13c2b6eb474deb9cb7f44c19d811b5cb\": not found" May 15 00:05:09.790119 kubelet[2630]: I0515 00:05:09.790045 2630 scope.go:117] "RemoveContainer" containerID="b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148" May 15 00:05:09.790367 containerd[1498]: time="2025-05-15T00:05:09.790331919Z" level=error msg="ContainerStatus for \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\": not found" May 15 00:05:09.790595 kubelet[2630]: E0515 00:05:09.790468 2630 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\": not found" containerID="b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148" May 15 00:05:09.790595 kubelet[2630]: I0515 00:05:09.790488 2630 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148"} err="failed to get container status \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"b266ba27e02a4cb067d8c596247849b0cf7a986db6085b0574cc03b38c321148\": not found" May 15 00:05:09.790595 kubelet[2630]: I0515 00:05:09.790502 2630 scope.go:117] "RemoveContainer" containerID="3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba" May 15 00:05:09.790762 containerd[1498]: time="2025-05-15T00:05:09.790653829Z" level=error msg="ContainerStatus for \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\": not found" May 15 00:05:09.790814 kubelet[2630]: E0515 00:05:09.790798 2630 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\": not found" containerID="3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba" May 15 00:05:09.790851 kubelet[2630]: I0515 00:05:09.790832 2630 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba"} err="failed to get container status \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e5b5f43f992414bf8351a86ea1b093483798b88d98d9e1d83f808a247b16eba\": not found" May 15 00:05:10.136992 kubelet[2630]: I0515 00:05:10.136838 2630 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="114ade3f-b18a-4f94-975c-78f9bbcdd956" path="/var/lib/kubelet/pods/114ade3f-b18a-4f94-975c-78f9bbcdd956/volumes" May 15 00:05:10.138025 kubelet[2630]: I0515 00:05:10.137856 2630 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8d7a44-30c6-4a33-ac93-eaadcc355d9a" 
path="/var/lib/kubelet/pods/8b8d7a44-30c6-4a33-ac93-eaadcc355d9a/volumes" May 15 00:05:10.264872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985-rootfs.mount: Deactivated successfully. May 15 00:05:10.265006 systemd[1]: var-lib-kubelet-pods-8b8d7a44\x2d30c6\x2d4a33\x2dac93\x2deaadcc355d9a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8n6pk.mount: Deactivated successfully. May 15 00:05:10.265115 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985-shm.mount: Deactivated successfully. May 15 00:05:10.265194 systemd[1]: var-lib-kubelet-pods-114ade3f\x2db18a\x2d4f94\x2d975c\x2d78f9bbcdd956-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6z989.mount: Deactivated successfully. May 15 00:05:10.265270 systemd[1]: var-lib-kubelet-pods-114ade3f\x2db18a\x2d4f94\x2d975c\x2d78f9bbcdd956-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 00:05:10.265350 systemd[1]: var-lib-kubelet-pods-114ade3f\x2db18a\x2d4f94\x2d975c\x2d78f9bbcdd956-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:05:11.207483 sshd[4265]: Connection closed by 139.178.68.195 port 51340 May 15 00:05:11.208435 sshd-session[4263]: pam_unix(sshd:session): session closed for user core May 15 00:05:11.211881 systemd[1]: sshd@23-172.237.148.154:22-139.178.68.195:51340.service: Deactivated successfully. May 15 00:05:11.214457 systemd[1]: session-24.scope: Deactivated successfully. May 15 00:05:11.216173 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit. May 15 00:05:11.217630 systemd-logind[1475]: Removed session 24. May 15 00:05:11.275251 systemd[1]: Started sshd@24-172.237.148.154:22-139.178.68.195:51342.service - OpenSSH per-connection server daemon (139.178.68.195:51342). 
May 15 00:05:11.643941 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 51342 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78 May 15 00:05:11.645487 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:05:11.650449 systemd-logind[1475]: New session 25 of user core. May 15 00:05:11.659150 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 00:05:12.142087 containerd[1498]: time="2025-05-15T00:05:12.141931310Z" level=info msg="StopPodSandbox for \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\"" May 15 00:05:12.143086 containerd[1498]: time="2025-05-15T00:05:12.142562930Z" level=info msg="TearDown network for sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" successfully" May 15 00:05:12.143086 containerd[1498]: time="2025-05-15T00:05:12.142586310Z" level=info msg="StopPodSandbox for \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" returns successfully" May 15 00:05:12.143537 containerd[1498]: time="2025-05-15T00:05:12.143335990Z" level=info msg="RemovePodSandbox for \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\"" May 15 00:05:12.143537 containerd[1498]: time="2025-05-15T00:05:12.143367590Z" level=info msg="Forcibly stopping sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\"" May 15 00:05:12.143537 containerd[1498]: time="2025-05-15T00:05:12.143432250Z" level=info msg="TearDown network for sandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" successfully" May 15 00:05:12.147090 containerd[1498]: time="2025-05-15T00:05:12.146841310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:05:12.147090 containerd[1498]: time="2025-05-15T00:05:12.146876720Z" level=info msg="RemovePodSandbox \"f94300bd1716961ff0569d271cde12969cbd790c679ec04b8474f13159842985\" returns successfully" May 15 00:05:12.147669 containerd[1498]: time="2025-05-15T00:05:12.147574950Z" level=info msg="StopPodSandbox for \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\"" May 15 00:05:12.147838 containerd[1498]: time="2025-05-15T00:05:12.147790919Z" level=info msg="TearDown network for sandbox \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" successfully" May 15 00:05:12.147838 containerd[1498]: time="2025-05-15T00:05:12.147807499Z" level=info msg="StopPodSandbox for \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" returns successfully" May 15 00:05:12.148659 containerd[1498]: time="2025-05-15T00:05:12.148183859Z" level=info msg="RemovePodSandbox for \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\"" May 15 00:05:12.148659 containerd[1498]: time="2025-05-15T00:05:12.148202709Z" level=info msg="Forcibly stopping sandbox \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\"" May 15 00:05:12.148659 containerd[1498]: time="2025-05-15T00:05:12.148261929Z" level=info msg="TearDown network for sandbox \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" successfully" May 15 00:05:12.151152 containerd[1498]: time="2025-05-15T00:05:12.151102189Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 15 00:05:12.151273 containerd[1498]: time="2025-05-15T00:05:12.151245549Z" level=info msg="RemovePodSandbox \"05ba2f060fa167799d2ac5e6019bd68e0d53a04f7fe5b512e2349714e82a70af\" returns successfully" May 15 00:05:12.331698 kubelet[2630]: I0515 00:05:12.331267 2630 memory_manager.go:355] "RemoveStaleState removing state" podUID="114ade3f-b18a-4f94-975c-78f9bbcdd956" containerName="cilium-agent" May 15 00:05:12.331698 kubelet[2630]: I0515 00:05:12.331305 2630 memory_manager.go:355] "RemoveStaleState removing state" podUID="8b8d7a44-30c6-4a33-ac93-eaadcc355d9a" containerName="cilium-operator" May 15 00:05:12.345724 sshd[4423]: Connection closed by 139.178.68.195 port 51342 May 15 00:05:12.345125 systemd[1]: Created slice kubepods-burstable-pod9ed93215_8487_4c48_881e_e05e6c809c45.slice - libcontainer container kubepods-burstable-pod9ed93215_8487_4c48_881e_e05e6c809c45.slice. May 15 00:05:12.344977 sshd-session[4421]: pam_unix(sshd:session): session closed for user core May 15 00:05:12.351930 kubelet[2630]: E0515 00:05:12.351856 2630 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:05:12.353150 systemd[1]: sshd@24-172.237.148.154:22-139.178.68.195:51342.service: Deactivated successfully. May 15 00:05:12.356511 systemd[1]: session-25.scope: Deactivated successfully. May 15 00:05:12.362189 systemd-logind[1475]: Session 25 logged out. Waiting for processes to exit. May 15 00:05:12.364959 systemd-logind[1475]: Removed session 25. 
May 15 00:05:12.393126 kubelet[2630]: I0515 00:05:12.392973 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-lib-modules\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393126 kubelet[2630]: I0515 00:05:12.393059 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ed93215-8487-4c48-881e-e05e6c809c45-cilium-ipsec-secrets\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393126 kubelet[2630]: I0515 00:05:12.393082 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-bpf-maps\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393126 kubelet[2630]: I0515 00:05:12.393107 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-cni-path\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393126 kubelet[2630]: I0515 00:05:12.393125 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-host-proc-sys-net\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393374 kubelet[2630]: I0515 00:05:12.393140 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-cilium-run\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393374 kubelet[2630]: I0515 00:05:12.393155 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-xtables-lock\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393374 kubelet[2630]: I0515 00:05:12.393168 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-hostproc\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393374 kubelet[2630]: I0515 00:05:12.393182 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-cilium-cgroup\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393374 kubelet[2630]: I0515 00:05:12.393194 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ed93215-8487-4c48-881e-e05e6c809c45-hubble-tls\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393374 kubelet[2630]: I0515 00:05:12.393218 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ed93215-8487-4c48-881e-e05e6c809c45-clustermesh-secrets\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393529 kubelet[2630]: I0515 00:05:12.393240 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-host-proc-sys-kernel\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393529 kubelet[2630]: I0515 00:05:12.393260 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czbbj\" (UniqueName: \"kubernetes.io/projected/9ed93215-8487-4c48-881e-e05e6c809c45-kube-api-access-czbbj\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393529 kubelet[2630]: I0515 00:05:12.393279 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ed93215-8487-4c48-881e-e05e6c809c45-etc-cni-netd\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.393529 kubelet[2630]: I0515 00:05:12.393296 2630 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ed93215-8487-4c48-881e-e05e6c809c45-cilium-config-path\") pod \"cilium-7wlqw\" (UID: \"9ed93215-8487-4c48-881e-e05e6c809c45\") " pod="kube-system/cilium-7wlqw"
May 15 00:05:12.412274 systemd[1]: Started sshd@25-172.237.148.154:22-139.178.68.195:51352.service - OpenSSH per-connection server daemon (139.178.68.195:51352).
May 15 00:05:12.652766 kubelet[2630]: E0515 00:05:12.652602 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:12.653651 containerd[1498]: time="2025-05-15T00:05:12.653594726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wlqw,Uid:9ed93215-8487-4c48-881e-e05e6c809c45,Namespace:kube-system,Attempt:0,}"
May 15 00:05:12.679487 containerd[1498]: time="2025-05-15T00:05:12.679207422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:05:12.679998 containerd[1498]: time="2025-05-15T00:05:12.679968452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:05:12.680989 containerd[1498]: time="2025-05-15T00:05:12.680940802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:05:12.681096 containerd[1498]: time="2025-05-15T00:05:12.681071232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:05:12.703167 systemd[1]: Started cri-containerd-202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e.scope - libcontainer container 202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e.
May 15 00:05:12.729273 containerd[1498]: time="2025-05-15T00:05:12.729222214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wlqw,Uid:9ed93215-8487-4c48-881e-e05e6c809c45,Namespace:kube-system,Attempt:0,} returns sandbox id \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\""
May 15 00:05:12.731729 kubelet[2630]: E0515 00:05:12.731697 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:12.734363 containerd[1498]: time="2025-05-15T00:05:12.734331343Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:05:12.746334 containerd[1498]: time="2025-05-15T00:05:12.746279931Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772\""
May 15 00:05:12.747968 containerd[1498]: time="2025-05-15T00:05:12.747879791Z" level=info msg="StartContainer for \"04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772\""
May 15 00:05:12.750673 sshd[4435]: Accepted publickey for core from 139.178.68.195 port 51352 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:05:12.753006 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:05:12.760190 systemd-logind[1475]: New session 26 of user core.
May 15 00:05:12.766182 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 00:05:12.782163 systemd[1]: Started cri-containerd-04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772.scope - libcontainer container 04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772.
May 15 00:05:12.819165 containerd[1498]: time="2025-05-15T00:05:12.819095409Z" level=info msg="StartContainer for \"04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772\" returns successfully"
May 15 00:05:12.836501 systemd[1]: cri-containerd-04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772.scope: Deactivated successfully.
May 15 00:05:12.870066 containerd[1498]: time="2025-05-15T00:05:12.869192161Z" level=info msg="shim disconnected" id=04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772 namespace=k8s.io
May 15 00:05:12.870066 containerd[1498]: time="2025-05-15T00:05:12.869270191Z" level=warning msg="cleaning up after shim disconnected" id=04441727a6f0472a71fd06049c0e46bdccd5c96754926bfea265aab45e9bb772 namespace=k8s.io
May 15 00:05:12.870066 containerd[1498]: time="2025-05-15T00:05:12.869300851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:12.887336 containerd[1498]: time="2025-05-15T00:05:12.887287178Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:05:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 00:05:13.000847 sshd[4497]: Connection closed by 139.178.68.195 port 51352
May 15 00:05:13.001388 sshd-session[4435]: pam_unix(sshd:session): session closed for user core
May 15 00:05:13.007455 systemd-logind[1475]: Session 26 logged out. Waiting for processes to exit.
May 15 00:05:13.007875 systemd[1]: sshd@25-172.237.148.154:22-139.178.68.195:51352.service: Deactivated successfully.
May 15 00:05:13.010802 systemd[1]: session-26.scope: Deactivated successfully.
May 15 00:05:13.012311 systemd-logind[1475]: Removed session 26.
May 15 00:05:13.067504 systemd[1]: Started sshd@26-172.237.148.154:22-139.178.68.195:51366.service - OpenSSH per-connection server daemon (139.178.68.195:51366).
May 15 00:05:13.394796 sshd[4552]: Accepted publickey for core from 139.178.68.195 port 51366 ssh2: RSA SHA256:NSU0GurrMFtoZTVyzurv6gpe6y18ZWWjq4qFgEsag78
May 15 00:05:13.396788 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:05:13.401971 systemd-logind[1475]: New session 27 of user core.
May 15 00:05:13.407163 systemd[1]: Started session-27.scope - Session 27 of User core.
May 15 00:05:13.764492 kubelet[2630]: E0515 00:05:13.764441 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:13.771646 containerd[1498]: time="2025-05-15T00:05:13.768963574Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:05:13.791041 containerd[1498]: time="2025-05-15T00:05:13.790956440Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164\""
May 15 00:05:13.794144 containerd[1498]: time="2025-05-15T00:05:13.791674850Z" level=info msg="StartContainer for \"bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164\""
May 15 00:05:13.878166 systemd[1]: Started cri-containerd-bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164.scope - libcontainer container bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164.
May 15 00:05:13.918399 containerd[1498]: time="2025-05-15T00:05:13.918339490Z" level=info msg="StartContainer for \"bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164\" returns successfully"
May 15 00:05:13.928622 systemd[1]: cri-containerd-bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164.scope: Deactivated successfully.
May 15 00:05:13.986615 containerd[1498]: time="2025-05-15T00:05:13.986485338Z" level=info msg="shim disconnected" id=bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164 namespace=k8s.io
May 15 00:05:13.986615 containerd[1498]: time="2025-05-15T00:05:13.986564088Z" level=warning msg="cleaning up after shim disconnected" id=bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164 namespace=k8s.io
May 15 00:05:13.986615 containerd[1498]: time="2025-05-15T00:05:13.986576548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:14.135901 kubelet[2630]: E0515 00:05:14.134736 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rc7ct" podUID="0f58d263-bf5c-4f84-8ccf-9304975e78cf"
May 15 00:05:14.502009 systemd[1]: run-containerd-runc-k8s.io-bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164-runc.azGf9p.mount: Deactivated successfully.
May 15 00:05:14.502193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbdb34e25074e6b0e36697ef5ef4cf6c00cd4df6a7c0d59b3cc2925acf00b164-rootfs.mount: Deactivated successfully.
May 15 00:05:14.767361 kubelet[2630]: E0515 00:05:14.767240 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:14.771070 containerd[1498]: time="2025-05-15T00:05:14.770898701Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:05:14.787756 containerd[1498]: time="2025-05-15T00:05:14.787730418Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28\""
May 15 00:05:14.796867 containerd[1498]: time="2025-05-15T00:05:14.796843467Z" level=info msg="StartContainer for \"af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28\""
May 15 00:05:14.797024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160807184.mount: Deactivated successfully.
May 15 00:05:14.831479 kubelet[2630]: I0515 00:05:14.830111 2630 setters.go:602] "Node became not ready" node="172-237-148-154" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:05:14Z","lastTransitionTime":"2025-05-15T00:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 00:05:14.847208 systemd[1]: Started cri-containerd-af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28.scope - libcontainer container af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28.
May 15 00:05:14.896676 containerd[1498]: time="2025-05-15T00:05:14.896522171Z" level=info msg="StartContainer for \"af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28\" returns successfully"
May 15 00:05:14.897932 systemd[1]: cri-containerd-af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28.scope: Deactivated successfully.
May 15 00:05:14.925772 containerd[1498]: time="2025-05-15T00:05:14.925691116Z" level=info msg="shim disconnected" id=af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28 namespace=k8s.io
May 15 00:05:14.925772 containerd[1498]: time="2025-05-15T00:05:14.925757266Z" level=warning msg="cleaning up after shim disconnected" id=af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28 namespace=k8s.io
May 15 00:05:14.925772 containerd[1498]: time="2025-05-15T00:05:14.925768046Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:15.690684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af76ca157b0536bb726eb24301656fd54ca50b1a79ac5d67f03ac1637eef7b28-rootfs.mount: Deactivated successfully.
May 15 00:05:15.771986 kubelet[2630]: E0515 00:05:15.771448 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:15.775175 containerd[1498]: time="2025-05-15T00:05:15.775114704Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:05:15.789048 containerd[1498]: time="2025-05-15T00:05:15.788886471Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac\""
May 15 00:05:15.792682 containerd[1498]: time="2025-05-15T00:05:15.790399620Z" level=info msg="StartContainer for \"b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac\""
May 15 00:05:15.834630 systemd[1]: run-containerd-runc-k8s.io-b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac-runc.PR7lLS.mount: Deactivated successfully.
May 15 00:05:15.844183 systemd[1]: Started cri-containerd-b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac.scope - libcontainer container b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac.
May 15 00:05:15.873887 systemd[1]: cri-containerd-b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac.scope: Deactivated successfully.
May 15 00:05:15.875527 containerd[1498]: time="2025-05-15T00:05:15.875112262Z" level=info msg="StartContainer for \"b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac\" returns successfully"
May 15 00:05:15.910082 containerd[1498]: time="2025-05-15T00:05:15.909949097Z" level=info msg="shim disconnected" id=b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac namespace=k8s.io
May 15 00:05:15.910638 containerd[1498]: time="2025-05-15T00:05:15.910019187Z" level=warning msg="cleaning up after shim disconnected" id=b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac namespace=k8s.io
May 15 00:05:15.910638 containerd[1498]: time="2025-05-15T00:05:15.910530836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:05:16.377070 kubelet[2630]: E0515 00:05:16.374279 2630 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rc7ct" podUID="0f58d263-bf5c-4f84-8ccf-9304975e78cf"
May 15 00:05:16.687543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b26345a529f5f1b877aeffd930a20d8da587a2a87cf6b82adff686512e3a64ac-rootfs.mount: Deactivated successfully.
May 15 00:05:16.776196 kubelet[2630]: E0515 00:05:16.776158 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:16.779570 containerd[1498]: time="2025-05-15T00:05:16.779535828Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:05:16.808431 containerd[1498]: time="2025-05-15T00:05:16.808393264Z" level=info msg="CreateContainer within sandbox \"202f9be3cf1aed1d6ab630d2ab3040ade60971e06083a6f2910c02ad65288b5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e8aac4f8a428ed515960b78149a9c1c409f3c09ca545d44a7fdc3f9af21f666\""
May 15 00:05:16.809163 containerd[1498]: time="2025-05-15T00:05:16.809122154Z" level=info msg="StartContainer for \"5e8aac4f8a428ed515960b78149a9c1c409f3c09ca545d44a7fdc3f9af21f666\""
May 15 00:05:16.840747 systemd[1]: Started cri-containerd-5e8aac4f8a428ed515960b78149a9c1c409f3c09ca545d44a7fdc3f9af21f666.scope - libcontainer container 5e8aac4f8a428ed515960b78149a9c1c409f3c09ca545d44a7fdc3f9af21f666.
May 15 00:05:16.872837 containerd[1498]: time="2025-05-15T00:05:16.872787007Z" level=info msg="StartContainer for \"5e8aac4f8a428ed515960b78149a9c1c409f3c09ca545d44a7fdc3f9af21f666\" returns successfully"
May 15 00:05:17.342714 update_engine[1476]: I20250515 00:05:17.342563 1476 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 15 00:05:17.342714 update_engine[1476]: I20250515 00:05:17.342710 1476 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 15 00:05:17.344284 update_engine[1476]: I20250515 00:05:17.344251 1476 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 15 00:05:17.345913 update_engine[1476]: I20250515 00:05:17.345877 1476 omaha_request_params.cc:62] Current group set to beta
May 15 00:05:17.346606 update_engine[1476]: I20250515 00:05:17.346163 1476 update_attempter.cc:499] Already updated boot flags. Skipping.
May 15 00:05:17.346606 update_engine[1476]: I20250515 00:05:17.346179 1476 update_attempter.cc:643] Scheduling an action processor start.
May 15 00:05:17.346606 update_engine[1476]: I20250515 00:05:17.346206 1476 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 15 00:05:17.346606 update_engine[1476]: I20250515 00:05:17.346292 1476 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 15 00:05:17.346606 update_engine[1476]: I20250515 00:05:17.346380 1476 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 15 00:05:17.346606 update_engine[1476]: I20250515 00:05:17.346390 1476 omaha_request_action.cc:272] Request:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]:
May 15 00:05:17.346606 update_engine[1476]: I20250515 00:05:17.346401 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 00:05:17.348446 update_engine[1476]: I20250515 00:05:17.348323 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 00:05:17.348640 locksmithd[1503]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 15 00:05:17.349138 update_engine[1476]: I20250515 00:05:17.349084 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 00:05:17.354059 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 15 00:05:17.395287 update_engine[1476]: E20250515 00:05:17.395154 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 00:05:17.395287 update_engine[1476]: I20250515 00:05:17.395257 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 15 00:05:17.786900 kubelet[2630]: E0515 00:05:17.786835 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:17.808632 kubelet[2630]: I0515 00:05:17.808537 2630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7wlqw" podStartSLOduration=5.80848617 podStartE2EDuration="5.80848617s" podCreationTimestamp="2025-05-15 00:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:05:17.8078531 +0000 UTC m=+185.837971916" watchObservedRunningTime="2025-05-15 00:05:17.80848617 +0000 UTC m=+185.838604996"
May 15 00:05:18.136210 kubelet[2630]: E0515 00:05:18.135386 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:18.790227 kubelet[2630]: E0515 00:05:18.789061 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:19.790512 kubelet[2630]: E0515 00:05:19.790418 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:20.471114 systemd-networkd[1386]: lxc_health: Link UP
May 15 00:05:20.475172 systemd-networkd[1386]: lxc_health: Gained carrier
May 15 00:05:20.911179 kubelet[2630]: E0515 00:05:20.910602 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:21.819411 kubelet[2630]: E0515 00:05:21.818185 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:22.019248 systemd-networkd[1386]: lxc_health: Gained IPv6LL
May 15 00:05:22.822806 kubelet[2630]: E0515 00:05:22.822759 2630 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
May 15 00:05:24.252518 systemd[1]: run-containerd-runc-k8s.io-5e8aac4f8a428ed515960b78149a9c1c409f3c09ca545d44a7fdc3f9af21f666-runc.YIqmVg.mount: Deactivated successfully.
May 15 00:05:26.624480 sshd[4554]: Connection closed by 139.178.68.195 port 51366
May 15 00:05:26.626785 sshd-session[4552]: pam_unix(sshd:session): session closed for user core
May 15 00:05:26.633553 systemd[1]: sshd@26-172.237.148.154:22-139.178.68.195:51366.service: Deactivated successfully.
May 15 00:05:26.637607 systemd[1]: session-27.scope: Deactivated successfully.
May 15 00:05:26.639646 systemd-logind[1475]: Session 27 logged out. Waiting for processes to exit.
May 15 00:05:26.641833 systemd-logind[1475]: Removed session 27.
May 15 00:05:27.342200 update_engine[1476]: I20250515 00:05:27.341989 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 15 00:05:27.342706 update_engine[1476]: I20250515 00:05:27.342695 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 15 00:05:27.343216 update_engine[1476]: I20250515 00:05:27.343185 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 15 00:05:27.344352 update_engine[1476]: E20250515 00:05:27.344225 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 15 00:05:27.344602 update_engine[1476]: I20250515 00:05:27.344377 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 2