May 27 03:09:56.849668 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:09:56.849700 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:09:56.849727 kernel: BIOS-provided physical RAM map:
May 27 03:09:56.849737 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 27 03:09:56.849745 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 27 03:09:56.849754 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 03:09:56.849764 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 27 03:09:56.849777 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 27 03:09:56.849791 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 03:09:56.849800 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 03:09:56.849809 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:09:56.849818 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 03:09:56.849826 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:09:56.849835 kernel: NX (Execute Disable) protection: active
May 27 03:09:56.849850 kernel: APIC: Static calls initialized
May 27 03:09:56.849859 kernel: SMBIOS 2.8 present.
May 27 03:09:56.849873 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 27 03:09:56.849882 kernel: DMI: Memory slots populated: 1/1
May 27 03:09:56.849892 kernel: Hypervisor detected: KVM
May 27 03:09:56.849902 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:09:56.849911 kernel: kvm-clock: using sched offset of 4378801418 cycles
May 27 03:09:56.849921 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:09:56.849932 kernel: tsc: Detected 2794.748 MHz processor
May 27 03:09:56.849942 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:09:56.849956 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:09:56.849966 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 27 03:09:56.849976 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 03:09:56.849986 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:09:56.849996 kernel: Using GB pages for direct mapping
May 27 03:09:56.850006 kernel: ACPI: Early table checksum verification disabled
May 27 03:09:56.850016 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 27 03:09:56.850026 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:09:56.850039 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:09:56.850049 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:09:56.850059 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 27 03:09:56.850070 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:09:56.850080 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:09:56.850090 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:09:56.850100 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:09:56.850111 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
May 27 03:09:56.850128 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
May 27 03:09:56.850138 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 27 03:09:56.850149 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
May 27 03:09:56.850160 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
May 27 03:09:56.850170 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
May 27 03:09:56.850181 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
May 27 03:09:56.850194 kernel: No NUMA configuration found
May 27 03:09:56.850204 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 27 03:09:56.850215 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
May 27 03:09:56.850225 kernel: Zone ranges:
May 27 03:09:56.850236 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:09:56.850247 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 27 03:09:56.850257 kernel: Normal empty
May 27 03:09:56.850268 kernel: Device empty
May 27 03:09:56.850278 kernel: Movable zone start for each node
May 27 03:09:56.850288 kernel: Early memory node ranges
May 27 03:09:56.850302 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 03:09:56.850312 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 27 03:09:56.850323 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 27 03:09:56.850333 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:09:56.850344 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 03:09:56.850355 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 27 03:09:56.850365 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:09:56.850379 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:09:56.850390 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:09:56.850403 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:09:56.850414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:09:56.850427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:09:56.850438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:09:56.850448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:09:56.850459 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:09:56.850470 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:09:56.850480 kernel: TSC deadline timer available
May 27 03:09:56.850490 kernel: CPU topo: Max. logical packages: 1
May 27 03:09:56.850504 kernel: CPU topo: Max. logical dies: 1
May 27 03:09:56.850514 kernel: CPU topo: Max. dies per package: 1
May 27 03:09:56.850524 kernel: CPU topo: Max. threads per core: 1
May 27 03:09:56.850535 kernel: CPU topo: Num. cores per package: 4
May 27 03:09:56.850546 kernel: CPU topo: Num. threads per package: 4
May 27 03:09:56.850556 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 03:09:56.850566 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:09:56.850577 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:09:56.850588 kernel: kvm-guest: setup PV sched yield
May 27 03:09:56.850607 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 03:09:56.850620 kernel: Booting paravirtualized kernel on KVM
May 27 03:09:56.850632 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:09:56.850642 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 03:09:56.850653 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 03:09:56.850664 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 03:09:56.850674 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 03:09:56.850684 kernel: kvm-guest: PV spinlocks enabled
May 27 03:09:56.850694 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:09:56.850706 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:09:56.850734 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:09:56.850744 kernel: random: crng init done
May 27 03:09:56.850754 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:09:56.850765 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:09:56.850774 kernel: Fallback order for Node 0: 0
May 27 03:09:56.850785 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
May 27 03:09:56.850805 kernel: Policy zone: DMA32
May 27 03:09:56.850825 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:09:56.850840 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 03:09:56.850850 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:09:56.850860 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:09:56.850870 kernel: Dynamic Preempt: voluntary
May 27 03:09:56.850880 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:09:56.850895 kernel: rcu: RCU event tracing is enabled.
May 27 03:09:56.850906 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 03:09:56.850916 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:09:56.850930 kernel: Rude variant of Tasks RCU enabled.
May 27 03:09:56.850945 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:09:56.850955 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:09:56.850966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 03:09:56.850977 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:09:56.850988 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:09:56.850998 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:09:56.851009 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 03:09:56.851020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:09:56.851041 kernel: Console: colour VGA+ 80x25
May 27 03:09:56.851052 kernel: printk: legacy console [ttyS0] enabled
May 27 03:09:56.851063 kernel: ACPI: Core revision 20240827
May 27 03:09:56.851074 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:09:56.851088 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:09:56.851099 kernel: x2apic enabled
May 27 03:09:56.851113 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:09:56.851124 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:09:56.851135 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:09:56.851149 kernel: kvm-guest: setup PV IPIs
May 27 03:09:56.851160 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:09:56.851171 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:09:56.851182 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 03:09:56.851192 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:09:56.851203 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:09:56.851212 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:09:56.851223 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:09:56.851233 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:09:56.851247 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:09:56.851258 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 03:09:56.851269 kernel: RETBleed: Mitigation: untrained return thunk
May 27 03:09:56.851280 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:09:56.851291 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:09:56.851302 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:09:56.851315 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:09:56.851325 kernel: x86/bugs: return thunk changed
May 27 03:09:56.851339 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:09:56.851349 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:09:56.851360 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:09:56.851370 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:09:56.851380 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:09:56.851391 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 03:09:56.851401 kernel: Freeing SMP alternatives memory: 32K
May 27 03:09:56.851411 kernel: pid_max: default: 32768 minimum: 301
May 27 03:09:56.851424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:09:56.851439 kernel: landlock: Up and running.
May 27 03:09:56.851452 kernel: SELinux: Initializing.
May 27 03:09:56.851463 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:09:56.851477 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:09:56.851489 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 03:09:56.851500 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:09:56.851510 kernel: ... version: 0
May 27 03:09:56.851521 kernel: ... bit width: 48
May 27 03:09:56.851532 kernel: ... generic registers: 6
May 27 03:09:56.851545 kernel: ... value mask: 0000ffffffffffff
May 27 03:09:56.851556 kernel: ... max period: 00007fffffffffff
May 27 03:09:56.851567 kernel: ... fixed-purpose events: 0
May 27 03:09:56.851578 kernel: ... event mask: 000000000000003f
May 27 03:09:56.851588 kernel: signal: max sigframe size: 1776
May 27 03:09:56.851609 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:09:56.851620 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:09:56.851631 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:09:56.851642 kernel: smp: Bringing up secondary CPUs ...
May 27 03:09:56.851656 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:09:56.851667 kernel: .... node #0, CPUs: #1 #2 #3
May 27 03:09:56.851678 kernel: smp: Brought up 1 node, 4 CPUs
May 27 03:09:56.851689 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 03:09:56.851701 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 136904K reserved, 0K cma-reserved)
May 27 03:09:56.851727 kernel: devtmpfs: initialized
May 27 03:09:56.851739 kernel: x86/mm: Memory block size: 128MB
May 27 03:09:56.851750 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:09:56.851761 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 03:09:56.851776 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:09:56.851787 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:09:56.851798 kernel: audit: initializing netlink subsys (disabled)
May 27 03:09:56.851809 kernel: audit: type=2000 audit(1748315393.893:1): state=initialized audit_enabled=0 res=1
May 27 03:09:56.851820 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:09:56.851831 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:09:56.851842 kernel: cpuidle: using governor menu
May 27 03:09:56.851853 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:09:56.851864 kernel: dca service started, version 1.12.1
May 27 03:09:56.851878 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 03:09:56.851889 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 03:09:56.851900 kernel: PCI: Using configuration type 1 for base access
May 27 03:09:56.851912 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:09:56.851923 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:09:56.851934 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:09:56.851945 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:09:56.851956 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:09:56.851967 kernel: ACPI: Added _OSI(Module Device)
May 27 03:09:56.851980 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:09:56.851991 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:09:56.852002 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:09:56.852013 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:09:56.852024 kernel: ACPI: Interpreter enabled
May 27 03:09:56.852035 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:09:56.852046 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:09:56.852057 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:09:56.852068 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:09:56.852082 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:09:56.852093 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:09:56.852318 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:09:56.852476 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:09:56.852639 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:09:56.852654 kernel: PCI host bridge to bus 0000:00
May 27 03:09:56.852931 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:09:56.853082 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:09:56.853228 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:09:56.853367 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 27 03:09:56.853506 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 03:09:56.853654 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 27 03:09:56.853849 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:09:56.854024 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:09:56.854195 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:09:56.854350 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 03:09:56.854501 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 03:09:56.854666 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 03:09:56.854840 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:09:56.855008 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 03:09:56.855169 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
May 27 03:09:56.855323 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 03:09:56.855477 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 03:09:56.855658 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 03:09:56.855834 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
May 27 03:09:56.855992 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 03:09:56.856147 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 03:09:56.856324 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:09:56.856484 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
May 27 03:09:56.856651 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
May 27 03:09:56.856827 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
May 27 03:09:56.856983 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 03:09:56.857146 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:09:56.857300 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:09:56.857475 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:09:56.857639 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
May 27 03:09:56.857812 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
May 27 03:09:56.857985 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:09:56.858145 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 03:09:56.858160 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:09:56.858172 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:09:56.858188 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:09:56.858199 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:09:56.858210 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:09:56.858221 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:09:56.858232 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:09:56.858243 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:09:56.858254 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:09:56.858265 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:09:56.858276 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:09:56.858290 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:09:56.858301 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:09:56.858312 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:09:56.858323 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:09:56.858334 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:09:56.858345 kernel: iommu: Default domain type: Translated
May 27 03:09:56.858356 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:09:56.858367 kernel: PCI: Using ACPI for IRQ routing
May 27 03:09:56.858378 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:09:56.858391 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 27 03:09:56.858402 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 27 03:09:56.858560 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:09:56.858743 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:09:56.858900 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:09:56.858915 kernel: vgaarb: loaded
May 27 03:09:56.858926 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:09:56.858937 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:09:56.858953 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:09:56.858965 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:09:56.858976 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:09:56.858987 kernel: pnp: PnP ACPI init
May 27 03:09:56.859154 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 03:09:56.859170 kernel: pnp: PnP ACPI: found 6 devices
May 27 03:09:56.859182 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:09:56.859193 kernel: NET: Registered PF_INET protocol family
May 27 03:09:56.859208 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:09:56.859220 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:09:56.859231 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:09:56.859242 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:09:56.859253 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:09:56.859264 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:09:56.859275 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:09:56.859286 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:09:56.859298 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:09:56.859311 kernel: NET: Registered PF_XDP protocol family
May 27 03:09:56.859451 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:09:56.859591 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:09:56.859761 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:09:56.859902 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 27 03:09:56.860040 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 03:09:56.860181 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 27 03:09:56.860196 kernel: PCI: CLS 0 bytes, default 64
May 27 03:09:56.860212 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:09:56.860224 kernel: Initialise system trusted keyrings
May 27 03:09:56.860235 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:09:56.860246 kernel: Key type asymmetric registered
May 27 03:09:56.860257 kernel: Asymmetric key parser 'x509' registered
May 27 03:09:56.860268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:09:56.860279 kernel: io scheduler mq-deadline registered
May 27 03:09:56.860291 kernel: io scheduler kyber registered
May 27 03:09:56.860302 kernel: io scheduler bfq registered
May 27 03:09:56.860315 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:09:56.860327 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:09:56.860338 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:09:56.860349 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 03:09:56.860360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:09:56.860372 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:09:56.860383 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:09:56.860394 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:09:56.860405 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:09:56.860419 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:09:56.860611 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 03:09:56.860776 kernel: rtc_cmos 00:04: registered as rtc0
May 27 03:09:56.860917 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:09:56 UTC (1748315396)
May 27 03:09:56.861052 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 03:09:56.861066 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:09:56.861077 kernel: NET: Registered PF_INET6 protocol family
May 27 03:09:56.861088 kernel: Segment Routing with IPv6
May 27 03:09:56.861104 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:09:56.861115 kernel: NET: Registered PF_PACKET protocol family
May 27 03:09:56.861126 kernel: Key type dns_resolver registered
May 27 03:09:56.861137 kernel: IPI shorthand broadcast: enabled
May 27 03:09:56.861148 kernel: sched_clock: Marking stable (2946005589, 124608773)->(3105028567, -34414205)
May 27 03:09:56.861159 kernel: registered taskstats version 1
May 27 03:09:56.861170 kernel: Loading compiled-in X.509 certificates
May 27 03:09:56.861181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:09:56.861192 kernel: Demotion targets for Node 0: null
May 27 03:09:56.861206 kernel: Key type .fscrypt registered
May 27 03:09:56.861217 kernel: Key type fscrypt-provisioning registered
May 27 03:09:56.861228 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:09:56.861239 kernel: ima: Allocated hash algorithm: sha1
May 27 03:09:56.861250 kernel: ima: No architecture policies found
May 27 03:09:56.861261 kernel: clk: Disabling unused clocks
May 27 03:09:56.861272 kernel: Warning: unable to open an initial console.
May 27 03:09:56.861283 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:09:56.861294 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:09:56.861308 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:09:56.861319 kernel: Run /init as init process
May 27 03:09:56.861330 kernel: with arguments:
May 27 03:09:56.861341 kernel: /init
May 27 03:09:56.861352 kernel: with environment:
May 27 03:09:56.861363 kernel: HOME=/
May 27 03:09:56.861373 kernel: TERM=linux
May 27 03:09:56.861384 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:09:56.861397 systemd[1]: Successfully made /usr/ read-only.
May 27 03:09:56.861425 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:09:56.861440 systemd[1]: Detected virtualization kvm.
May 27 03:09:56.861452 systemd[1]: Detected architecture x86-64.
May 27 03:09:56.861463 systemd[1]: Running in initrd.
May 27 03:09:56.861475 systemd[1]: No hostname configured, using default hostname.
May 27 03:09:56.861490 systemd[1]: Hostname set to .
May 27 03:09:56.861502 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:09:56.861514 systemd[1]: Queued start job for default target initrd.target.
May 27 03:09:56.861526 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:09:56.861539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:09:56.861552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:09:56.861564 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:09:56.861577 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:09:56.861592 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:09:56.861616 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:09:56.861628 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:09:56.861641 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:09:56.861653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:09:56.861665 systemd[1]: Reached target paths.target - Path Units.
May 27 03:09:56.861677 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:09:56.861692 systemd[1]: Reached target swap.target - Swaps.
May 27 03:09:56.861704 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:09:56.861730 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:09:56.861743 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:09:56.861755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:09:56.861768 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 03:09:56.861780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:09:56.861792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:09:56.861807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:09:56.861820 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:09:56.861832 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:09:56.861845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:09:56.861859 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:09:56.861875 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:09:56.861889 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:09:56.861902 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:09:56.861914 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:09:56.861926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:09:56.861939 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:09:56.861954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:09:56.861967 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:09:56.861979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:09:56.862013 systemd-journald[219]: Collecting audit messages is disabled. May 27 03:09:56.862043 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 27 03:09:56.862055 systemd-journald[219]: Journal started May 27 03:09:56.862081 systemd-journald[219]: Runtime Journal (/run/log/journal/44f5fa0728c84e64a0f6a4e6b8b994d9) is 6M, max 48.6M, 42.5M free. May 27 03:09:56.848010 systemd-modules-load[221]: Inserted module 'overlay' May 27 03:09:56.888837 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:09:56.888856 kernel: Bridge firewalling registered May 27 03:09:56.877558 systemd-modules-load[221]: Inserted module 'br_netfilter' May 27 03:09:56.891496 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:09:56.891937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:09:56.894269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:09:56.899580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:09:56.902268 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:09:56.905010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:09:56.911547 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:09:56.920556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:09:56.923353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:09:56.925104 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:09:56.930818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:09:56.932795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:09:56.937275 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 27 03:09:56.940446 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 03:09:56.962852 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:09:56.984572 systemd-resolved[259]: Positive Trust Anchors: May 27 03:09:56.984589 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:09:56.984641 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:09:56.987601 systemd-resolved[259]: Defaulting to hostname 'linux'. May 27 03:09:56.988818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:09:56.994958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:09:57.076757 kernel: SCSI subsystem initialized May 27 03:09:57.085741 kernel: Loading iSCSI transport class v2.0-870. 
May 27 03:09:57.096745 kernel: iscsi: registered transport (tcp) May 27 03:09:57.119756 kernel: iscsi: registered transport (qla4xxx) May 27 03:09:57.119809 kernel: QLogic iSCSI HBA Driver May 27 03:09:57.140747 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:09:57.170027 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:09:57.173901 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:09:57.227522 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:09:57.229477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 03:09:57.289768 kernel: raid6: avx2x4 gen() 27077 MB/s May 27 03:09:57.306774 kernel: raid6: avx2x2 gen() 27117 MB/s May 27 03:09:57.323925 kernel: raid6: avx2x1 gen() 21556 MB/s May 27 03:09:57.324022 kernel: raid6: using algorithm avx2x2 gen() 27117 MB/s May 27 03:09:57.342053 kernel: raid6: .... xor() 17531 MB/s, rmw enabled May 27 03:09:57.342162 kernel: raid6: using avx2x2 recovery algorithm May 27 03:09:57.364768 kernel: xor: automatically using best checksumming function avx May 27 03:09:57.535757 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:09:57.546306 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:09:57.550730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:09:57.585901 systemd-udevd[473]: Using default interface naming scheme 'v255'. May 27 03:09:57.591496 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:09:57.594893 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:09:57.627073 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation May 27 03:09:57.658317 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 27 03:09:57.662943 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:09:57.744606 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:09:57.748006 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 03:09:57.787786 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 03:09:57.791834 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 03:09:57.800994 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:09:57.801021 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:09:57.801042 kernel: GPT:9289727 != 19775487 May 27 03:09:57.801052 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:09:57.801062 kernel: GPT:9289727 != 19775487 May 27 03:09:57.801072 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 03:09:57.801082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:09:57.821524 kernel: libata version 3.00 loaded. May 27 03:09:57.823742 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 03:09:57.825747 kernel: AES CTR mode by8 optimization enabled May 27 03:09:57.838752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:09:57.838881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:09:57.847097 kernel: ahci 0000:00:1f.2: version 3.0 May 27 03:09:57.847344 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 03:09:57.847363 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 03:09:57.843629 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 27 03:09:57.853571 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 03:09:57.853819 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 03:09:57.853962 kernel: scsi host0: ahci May 27 03:09:57.856662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:09:57.859544 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:09:57.866743 kernel: scsi host1: ahci May 27 03:09:57.868736 kernel: scsi host2: ahci May 27 03:09:57.870775 kernel: scsi host3: ahci May 27 03:09:57.874825 kernel: scsi host4: ahci May 27 03:09:57.875031 kernel: scsi host5: ahci May 27 03:09:57.881670 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0 May 27 03:09:57.881695 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0 May 27 03:09:57.881707 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0 May 27 03:09:57.881743 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0 May 27 03:09:57.881754 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0 May 27 03:09:57.881771 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0 May 27 03:09:57.888844 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 03:09:57.913858 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 03:09:57.932740 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 03:09:57.932837 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 03:09:57.936670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 03:09:57.949902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:09:57.952664 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:09:58.063804 disk-uuid[634]: Primary Header is updated. May 27 03:09:58.063804 disk-uuid[634]: Secondary Entries is updated. May 27 03:09:58.063804 disk-uuid[634]: Secondary Header is updated. May 27 03:09:58.068734 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:09:58.074780 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:09:58.188761 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 03:09:58.188847 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 03:09:58.189747 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 03:09:58.194744 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 03:09:58.194771 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 03:09:58.196019 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 03:09:58.196846 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 03:09:58.196867 kernel: ata3.00: applying bridge limits May 27 03:09:58.198032 kernel: ata3.00: configured for UDMA/100 May 27 03:09:58.198765 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 03:09:58.261751 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 03:09:58.262101 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 03:09:58.278066 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 03:09:58.725609 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:09:58.728411 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:09:58.731296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:09:58.733873 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 27 03:09:58.737101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:09:58.768559 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:09:59.077751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:09:59.078440 disk-uuid[635]: The operation has completed successfully. May 27 03:09:59.118024 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:09:59.118197 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:09:59.159003 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:09:59.187729 sh[663]: Success May 27 03:09:59.206061 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:09:59.206150 kernel: device-mapper: uevent: version 1.0.3 May 27 03:09:59.207254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:09:59.217767 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 03:09:59.254580 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:09:59.258579 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:09:59.274739 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 27 03:09:59.281661 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:09:59.281698 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (675) May 27 03:09:59.285980 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:09:59.286024 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:09:59.286037 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:09:59.291093 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:09:59.292567 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:09:59.294380 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:09:59.295238 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:09:59.297191 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 03:09:59.326630 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (708) May 27 03:09:59.326702 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:09:59.326732 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:09:59.328146 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:09:59.334762 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:09:59.335946 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:09:59.339125 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 27 03:09:59.481978 ignition[751]: Ignition 2.21.0 May 27 03:09:59.481986 ignition[751]: Stage: fetch-offline May 27 03:09:59.482690 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:09:59.482041 ignition[751]: no configs at "/usr/lib/ignition/base.d" May 27 03:09:59.482053 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:09:59.487740 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:09:59.482171 ignition[751]: parsed url from cmdline: "" May 27 03:09:59.482176 ignition[751]: no config URL provided May 27 03:09:59.482182 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:09:59.482193 ignition[751]: no config at "/usr/lib/ignition/user.ign" May 27 03:09:59.482221 ignition[751]: op(1): [started] loading QEMU firmware config module May 27 03:09:59.482228 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 03:09:59.498464 ignition[751]: op(1): [finished] loading QEMU firmware config module May 27 03:09:59.538175 ignition[751]: parsing config with SHA512: 50568a5e9e89a75e5df4232de35b672b78eab531b40051cb1aaf033ab728aaeb0ed8d046f8e6963e48f75b537c7f0fbd549bb60d6f2070ebb4ba141c12ed76b7 May 27 03:09:59.538928 systemd-networkd[853]: lo: Link UP May 27 03:09:59.538937 systemd-networkd[853]: lo: Gained carrier May 27 03:09:59.541485 systemd-networkd[853]: Enumeration completed May 27 03:09:59.544834 ignition[751]: fetch-offline: fetch-offline passed May 27 03:09:59.541630 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:09:59.544887 ignition[751]: Ignition finished successfully May 27 03:09:59.542047 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:09:59.542052 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 27 03:09:59.542519 systemd-networkd[853]: eth0: Link UP May 27 03:09:59.542523 systemd-networkd[853]: eth0: Gained carrier May 27 03:09:59.542540 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:09:59.544451 unknown[751]: fetched base config from "system" May 27 03:09:59.544459 unknown[751]: fetched user config from "qemu" May 27 03:09:59.554258 systemd[1]: Reached target network.target - Network. May 27 03:09:59.556806 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:09:59.559017 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 03:09:59.560232 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 03:09:59.584816 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:09:59.615117 ignition[857]: Ignition 2.21.0 May 27 03:09:59.615612 ignition[857]: Stage: kargs May 27 03:09:59.615820 ignition[857]: no configs at "/usr/lib/ignition/base.d" May 27 03:09:59.615832 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:09:59.616727 ignition[857]: kargs: kargs passed May 27 03:09:59.616778 ignition[857]: Ignition finished successfully May 27 03:09:59.640704 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:09:59.643130 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 27 03:09:59.682995 ignition[866]: Ignition 2.21.0 May 27 03:09:59.683018 ignition[866]: Stage: disks May 27 03:09:59.683217 ignition[866]: no configs at "/usr/lib/ignition/base.d" May 27 03:09:59.683231 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:09:59.687605 ignition[866]: disks: disks passed May 27 03:09:59.687739 ignition[866]: Ignition finished successfully May 27 03:09:59.694865 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:09:59.697768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:09:59.697899 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:09:59.703222 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:09:59.705569 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:09:59.705673 systemd[1]: Reached target basic.target - Basic System. May 27 03:09:59.709391 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 03:09:59.742958 systemd-resolved[259]: Detected conflict on linux IN A 10.0.0.11 May 27 03:09:59.742975 systemd-resolved[259]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. May 27 03:09:59.744621 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:09:59.753480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:09:59.756694 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:09:59.884749 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:09:59.885427 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:09:59.886121 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:09:59.890100 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 27 03:09:59.891166 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:09:59.892311 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:09:59.892350 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:09:59.892373 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:09:59.909108 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:09:59.910518 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:09:59.917742 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (884) May 27 03:09:59.919933 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:09:59.919956 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:09:59.919967 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:09:59.924579 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:09:59.957871 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:09:59.965027 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory May 27 03:09:59.970783 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:09:59.976093 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:10:00.098284 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:10:00.099830 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 03:10:00.102564 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 27 03:10:00.139804 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:10:00.154439 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 03:10:00.175590 ignition[999]: INFO : Ignition 2.21.0 May 27 03:10:00.175590 ignition[999]: INFO : Stage: mount May 27 03:10:00.177761 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:10:00.177761 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:10:00.177761 ignition[999]: INFO : mount: mount passed May 27 03:10:00.177761 ignition[999]: INFO : Ignition finished successfully May 27 03:10:00.179512 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:10:00.182774 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:10:00.281458 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:10:00.284098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:10:00.312748 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1012) May 27 03:10:00.314895 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:10:00.314922 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:10:00.314937 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:10:00.319337 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 03:10:00.351873 ignition[1029]: INFO : Ignition 2.21.0 May 27 03:10:00.351873 ignition[1029]: INFO : Stage: files May 27 03:10:00.354171 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:10:00.354171 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:10:00.354171 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping May 27 03:10:00.354171 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:10:00.354171 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:10:00.361530 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:10:00.361530 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:10:00.361530 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:10:00.361530 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 03:10:00.361530 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 27 03:10:00.358828 unknown[1029]: wrote ssh authorized keys file for user: core May 27 03:10:00.405866 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 03:10:00.624988 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 03:10:00.627181 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 03:10:00.627181 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 27 03:10:01.203966 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 03:10:01.463125 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 03:10:01.463125 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 03:10:01.468624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:10:01.468624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:10:01.468624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:10:01.468624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:10:01.468624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:10:01.468624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:10:01.468624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:10:01.485072 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:10:01.485072 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:10:01.485072 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:10:01.485072 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:10:01.485072 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:10:01.485072 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 27 03:10:01.500849 systemd-networkd[853]: eth0: Gained IPv6LL May 27 03:10:02.167433 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 03:10:03.408589 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 03:10:03.408589 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 03:10:03.421737 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:10:03.571296 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:10:03.571296 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 03:10:03.571296 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 27 03:10:03.571296 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:10:03.580492 ignition[1029]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:10:03.580492 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 27 03:10:03.580492 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 27 03:10:03.603270 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:10:03.609194 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:10:03.611302 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 27 03:10:03.611302 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 27 03:10:03.611302 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:10:03.611302 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:10:03.611302 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:10:03.611302 ignition[1029]: INFO : files: files passed May 27 03:10:03.611302 ignition[1029]: INFO : Ignition finished successfully May 27 03:10:03.618497 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:10:03.625070 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:10:03.631691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 03:10:03.641065 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:10:03.641230 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 27 03:10:03.646584 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 03:10:03.650233 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:10:03.650233 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:10:03.654927 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:10:03.658877 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:10:03.659222 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:10:03.665697 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:10:03.750446 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:10:03.750600 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:10:03.753456 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:10:03.755825 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:10:03.758165 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:10:03.761277 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:10:03.800583 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:10:03.805863 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:10:03.837395 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:10:03.837708 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:10:03.842578 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:10:03.846919 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:10:03.847091 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:10:03.850403 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:10:03.853082 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:10:03.854573 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:10:03.857051 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:10:03.857425 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:10:03.858120 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:10:03.858593 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:10:03.859293 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:10:03.859790 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:10:03.860455 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:10:03.861070 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:10:03.861505 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:10:03.861651 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:10:03.880515 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:10:03.883051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:10:03.885489 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:10:03.887980 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:10:03.888159 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:10:03.888363 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:10:03.894398 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:10:03.894602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:10:03.896000 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:10:03.898481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:10:03.903898 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:10:03.906809 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:10:03.907957 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:10:03.910061 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:10:03.910204 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:10:03.912358 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:10:03.912504 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:10:03.913480 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:10:03.913664 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:10:03.917334 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:10:03.917506 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:10:03.919627 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:10:03.924518 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:10:03.925549 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:10:03.925739 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:10:03.928075 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:10:03.928214 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:10:03.936983 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:10:03.940134 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:10:03.962153 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:10:03.967094 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:10:03.967260 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:10:03.975801 ignition[1085]: INFO : Ignition 2.21.0
May 27 03:10:03.975801 ignition[1085]: INFO : Stage: umount
May 27 03:10:03.975801 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:10:03.975801 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:10:03.980647 ignition[1085]: INFO : umount: umount passed
May 27 03:10:03.981687 ignition[1085]: INFO : Ignition finished successfully
May 27 03:10:03.984217 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:10:03.984384 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:10:03.986903 systemd[1]: Stopped target network.target - Network.
May 27 03:10:03.988738 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:10:03.988815 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:10:03.990110 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:10:03.990180 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:10:03.992499 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:10:03.992575 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:10:03.993519 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:10:03.993583 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:10:03.994007 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:10:03.994073 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:10:03.994476 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:10:03.995061 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:10:04.004123 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:10:04.004294 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:10:04.009191 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:10:04.009598 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:10:04.009666 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:10:04.013388 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:10:04.017351 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:10:04.017528 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:10:04.021390 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:10:04.023007 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:10:04.025215 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:10:04.025284 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:10:04.028596 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:10:04.028698 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:10:04.028785 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:10:04.031752 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:10:04.031820 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:10:04.036890 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:10:04.036969 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:10:04.038105 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:10:04.043952 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:10:04.057863 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:10:04.061940 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:10:04.065257 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:10:04.065407 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:10:04.067302 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:10:04.067391 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:10:04.068597 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:10:04.068647 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:10:04.070921 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:10:04.070990 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:10:04.071757 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:10:04.071824 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:10:04.078769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:10:04.078864 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:10:04.082657 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:10:04.084196 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:10:04.084250 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:10:04.088013 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:10:04.088064 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:10:04.093200 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 03:10:04.093251 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:10:04.097238 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:10:04.097303 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:10:04.098565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:10:04.098627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:10:04.115682 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:10:04.115878 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:10:04.117233 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:10:04.120283 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:10:04.133440 systemd[1]: Switching root.
May 27 03:10:04.170305 systemd-journald[219]: Journal stopped
May 27 03:10:05.422865 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
May 27 03:10:05.422953 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:10:05.422971 kernel: SELinux: policy capability open_perms=1
May 27 03:10:05.422987 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:10:05.423005 kernel: SELinux: policy capability always_check_network=0
May 27 03:10:05.423020 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:10:05.423039 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:10:05.423053 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:10:05.423068 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:10:05.423082 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:10:05.423096 kernel: audit: type=1403 audit(1748315404.493:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:10:05.423120 systemd[1]: Successfully loaded SELinux policy in 55.805ms.
May 27 03:10:05.423156 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.845ms.
May 27 03:10:05.423173 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:10:05.423190 systemd[1]: Detected virtualization kvm.
May 27 03:10:05.423208 systemd[1]: Detected architecture x86-64.
May 27 03:10:05.423224 systemd[1]: Detected first boot.
May 27 03:10:05.423240 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:10:05.423256 zram_generator::config[1130]: No configuration found.
May 27 03:10:05.423278 kernel: Guest personality initialized and is inactive
May 27 03:10:05.423299 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:10:05.423314 kernel: Initialized host personality
May 27 03:10:05.423330 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:10:05.423349 systemd[1]: Populated /etc with preset unit settings.
May 27 03:10:05.423365 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:10:05.423380 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:10:05.423410 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:10:05.423426 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:10:05.423442 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:10:05.423459 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:10:05.423474 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:10:05.423491 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:10:05.423516 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:10:05.423532 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:10:05.423547 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:10:05.423559 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:10:05.423572 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:10:05.423584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:10:05.423596 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:10:05.423611 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:10:05.423627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:10:05.423640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:10:05.423652 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:10:05.423664 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:10:05.423677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:10:05.423695 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:10:05.423707 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:10:05.423840 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:10:05.423858 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:10:05.423870 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:10:05.423882 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:10:05.423895 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:10:05.423907 systemd[1]: Reached target swap.target - Swaps.
May 27 03:10:05.423919 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:10:05.423932 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:10:05.423944 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:10:05.423958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:10:05.423973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:10:05.423985 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:10:05.423998 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:10:05.424010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:10:05.424022 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:10:05.424035 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:10:05.424047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:10:05.424059 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:10:05.424072 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:10:05.424086 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:10:05.424099 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:10:05.424112 systemd[1]: Reached target machines.target - Containers.
May 27 03:10:05.424123 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:10:05.424136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:10:05.424148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:10:05.424161 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:10:05.424173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:10:05.424188 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:10:05.424200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:10:05.424212 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:10:05.424224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:10:05.424236 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:10:05.424248 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:10:05.424261 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:10:05.424278 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:10:05.424294 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:10:05.424315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:10:05.424331 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:10:05.424347 kernel: loop: module loaded
May 27 03:10:05.424363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:10:05.424380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:10:05.424407 kernel: fuse: init (API version 7.41)
May 27 03:10:05.424425 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:10:05.424444 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:10:05.424461 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:10:05.424489 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:10:05.424508 systemd[1]: Stopped verity-setup.service.
May 27 03:10:05.424524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:10:05.424541 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:10:05.424564 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:10:05.424581 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:10:05.424598 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:10:05.424620 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:10:05.424637 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:10:05.424654 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:10:05.424673 kernel: ACPI: bus type drm_connector registered
May 27 03:10:05.424690 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:10:05.424706 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:10:05.424740 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:10:05.424757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:10:05.424774 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:10:05.424828 systemd-journald[1201]: Collecting audit messages is disabled.
May 27 03:10:05.424865 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:10:05.424884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:10:05.424903 systemd-journald[1201]: Journal started
May 27 03:10:05.424936 systemd-journald[1201]: Runtime Journal (/run/log/journal/44f5fa0728c84e64a0f6a4e6b8b994d9) is 6M, max 48.6M, 42.5M free.
May 27 03:10:05.120201 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:10:05.142964 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 03:10:05.143656 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:10:05.427614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:10:05.427653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:10:05.430754 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:10:05.431946 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:10:05.432208 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:10:05.433635 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:10:05.434010 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:10:05.435444 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:10:05.436948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:10:05.438539 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:10:05.440118 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:10:05.456579 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:10:05.459793 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:10:05.463971 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:10:05.465460 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:10:05.465500 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:10:05.467838 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:10:05.472345 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:10:05.474611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:10:05.476943 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:10:05.481951 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:10:05.483923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:10:05.486302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:10:05.489891 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:10:05.491974 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:10:05.494928 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:10:05.503485 systemd-journald[1201]: Time spent on flushing to /var/log/journal/44f5fa0728c84e64a0f6a4e6b8b994d9 is 17.235ms for 982 entries.
May 27 03:10:05.503485 systemd-journald[1201]: System Journal (/var/log/journal/44f5fa0728c84e64a0f6a4e6b8b994d9) is 8M, max 195.6M, 187.6M free.
May 27 03:10:05.532494 systemd-journald[1201]: Received client request to flush runtime journal.
May 27 03:10:05.532540 kernel: loop0: detected capacity change from 0 to 113872
May 27 03:10:05.499452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:10:05.503328 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:10:05.506278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:10:05.508466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:10:05.530903 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:10:05.533528 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:10:05.538962 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:10:05.541100 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:10:05.543037 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:10:05.552633 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 27 03:10:05.553179 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 27 03:10:05.564067 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:10:05.565406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:10:05.569151 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:10:05.588597 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:10:05.597741 kernel: loop1: detected capacity change from 0 to 224512
May 27 03:10:05.623303 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:10:05.628762 kernel: loop2: detected capacity change from 0 to 146240
May 27 03:10:05.628885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:10:05.667417 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 27 03:10:05.667443 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 27 03:10:05.672752 kernel: loop3: detected capacity change from 0 to 113872
May 27 03:10:05.676084 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:10:05.687753 kernel: loop4: detected capacity change from 0 to 224512
May 27 03:10:05.699777 kernel: loop5: detected capacity change from 0 to 146240
May 27 03:10:05.711581 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 03:10:05.712271 (sd-merge)[1273]: Merged extensions into '/usr'.
May 27 03:10:05.718096 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:10:05.718114 systemd[1]: Reloading...
May 27 03:10:05.779788 zram_generator::config[1299]: No configuration found.
May 27 03:10:05.892056 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:10:05.922506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:10:06.013331 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:10:06.013907 systemd[1]: Reloading finished in 295 ms.
May 27 03:10:06.052440 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:10:06.054569 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:10:06.075912 systemd[1]: Starting ensure-sysext.service...
May 27 03:10:06.078425 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:10:06.089602 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
May 27 03:10:06.089625 systemd[1]: Reloading...
May 27 03:10:06.103110 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:10:06.103149 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:10:06.103464 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:10:06.103708 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:10:06.104839 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:10:06.105184 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
May 27 03:10:06.105260 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
May 27 03:10:06.116131 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:10:06.116150 systemd-tmpfiles[1338]: Skipping /boot
May 27 03:10:06.135004 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:10:06.135153 systemd-tmpfiles[1338]: Skipping /boot
May 27 03:10:06.156743 zram_generator::config[1365]: No configuration found.
May 27 03:10:06.263341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:10:06.348709 systemd[1]: Reloading finished in 258 ms.
May 27 03:10:06.371971 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:10:06.401834 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:10:06.411291 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:10:06.413896 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:10:06.432512 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:10:06.436259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:10:06.441534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:10:06.444799 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:10:06.449020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:10:06.449335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:10:06.451063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:10:06.454374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:10:06.457176 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:10:06.459298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:10:06.459480 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:10:06.469025 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:10:06.470337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:10:06.471857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:10:06.472083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:10:06.473878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:10:06.474094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:10:06.481299 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:10:06.485579 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:10:06.485859 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:10:06.496494 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
May 27 03:10:06.498071 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:10:06.506273 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:10:06.506511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:10:06.511020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:10:06.513187 augenrules[1439]: No rules
May 27 03:10:06.517833 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:10:06.520917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:10:06.523763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:10:06.525274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:10:06.525515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:10:06.536977 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:10:06.538314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:10:06.540086 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:10:06.541602 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:10:06.543613 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:10:06.545522 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:10:06.548586 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:10:06.550489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:10:06.550868 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:10:06.553244 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:10:06.553481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:10:06.555128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:10:06.555391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:10:06.557305 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:10:06.557591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:10:06.571926 systemd[1]: Finished ensure-sysext.service.
May 27 03:10:06.587334 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:10:06.596493 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:10:06.597568 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:10:06.597639 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:10:06.599859 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 03:10:06.601049 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:10:06.642803 systemd-resolved[1407]: Positive Trust Anchors:
May 27 03:10:06.642826 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:10:06.642857 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:10:06.648623 systemd-resolved[1407]: Defaulting to hostname 'linux'.
May 27 03:10:06.653267 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:10:06.655958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:10:06.676707 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:10:06.723748 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:10:06.734768 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 03:10:06.741748 kernel: ACPI: button: Power Button [PWRF]
May 27 03:10:06.748077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 03:10:06.755836 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 03:10:06.756242 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 03:10:06.751569 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:10:06.787859 systemd-networkd[1488]: lo: Link UP
May 27 03:10:06.787909 systemd-networkd[1488]: lo: Gained carrier
May 27 03:10:06.793282 systemd-networkd[1488]: Enumeration completed
May 27 03:10:06.793425 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:10:06.795000 systemd[1]: Reached target network.target - Network.
May 27 03:10:06.796612 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:10:06.796628 systemd-networkd[1488]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:10:06.797412 systemd-networkd[1488]: eth0: Link UP
May 27 03:10:06.797627 systemd-networkd[1488]: eth0: Gained carrier
May 27 03:10:06.797654 systemd-networkd[1488]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:10:06.800863 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:10:06.804962 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:10:06.809790 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:10:06.816987 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 03:10:06.818330 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:10:06.819513 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:10:06.820770 systemd-networkd[1488]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:10:06.820803 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:10:06.822077 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:10:06.823237 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:10:06.824506 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:10:06.824537 systemd[1]: Reached target paths.target - Path Units.
May 27 03:10:06.825810 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:10:06.827127 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:10:06.828779 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:10:06.830051 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:10:06.831927 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:10:06.833258 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection.
May 27 03:10:06.834772 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:10:06.836299 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 03:10:06.836354 systemd-timesyncd[1489]: Initial clock synchronization to Tue 2025-05-27 03:10:07.201986 UTC.
May 27 03:10:06.839397 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:10:06.841879 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:10:06.843169 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:10:06.846833 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:10:06.848572 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:10:06.852838 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:10:06.861076 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:10:06.862290 systemd[1]: Reached target basic.target - Basic System.
May 27 03:10:06.863582 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:10:06.863680 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:10:06.865289 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:10:06.867971 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:10:06.872437 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:10:06.879921 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:10:06.882489 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:10:06.885798 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:10:06.887868 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:10:06.898609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:10:06.901449 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:10:06.905267 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:10:06.909034 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:10:06.913588 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache
May 27 03:10:06.913606 oslogin_cache_refresh[1530]: Refreshing passwd entry cache
May 27 03:10:06.916155 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:10:06.917012 jq[1528]: false
May 27 03:10:06.918795 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:10:06.923119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:10:06.925914 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:10:06.931766 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting
May 27 03:10:06.931766 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:10:06.931766 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache
May 27 03:10:06.931401 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:10:06.930490 oslogin_cache_refresh[1530]: Failure getting users, quitting
May 27 03:10:06.930512 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:10:06.930568 oslogin_cache_refresh[1530]: Refreshing group entry cache
May 27 03:10:06.934277 extend-filesystems[1529]: Found loop3
May 27 03:10:06.935393 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:10:06.935522 extend-filesystems[1529]: Found loop4
May 27 03:10:06.937525 extend-filesystems[1529]: Found loop5
May 27 03:10:06.937525 extend-filesystems[1529]: Found sr0
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda1
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda2
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda3
May 27 03:10:06.937525 extend-filesystems[1529]: Found usr
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda4
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda6
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda7
May 27 03:10:06.937525 extend-filesystems[1529]: Found vda9
May 27 03:10:06.937525 extend-filesystems[1529]: Checking size of /dev/vda9
May 27 03:10:06.949389 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting
May 27 03:10:06.949389 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:10:06.942338 oslogin_cache_refresh[1530]: Failure getting groups, quitting
May 27 03:10:06.944848 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:10:06.942363 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:10:06.945932 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:10:06.946187 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:10:06.946649 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:10:06.947297 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:10:06.951375 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:10:06.951704 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:10:06.954310 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:10:06.954577 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:10:06.955695 jq[1543]: true
May 27 03:10:06.986942 extend-filesystems[1529]: Resized partition /dev/vda9
May 27 03:10:06.989843 extend-filesystems[1565]: resize2fs 1.47.2 (1-Jan-2025)
May 27 03:10:06.991936 update_engine[1540]: I20250527 03:10:06.991863  1540 main.cc:92] Flatcar Update Engine starting
May 27 03:10:06.993733 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 03:10:07.008657 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:10:07.013861 jq[1551]: true
May 27 03:10:07.014387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:10:07.029774 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 03:10:07.062637 tar[1549]: linux-amd64/LICENSE
May 27 03:10:07.064968 tar[1549]: linux-amd64/helm
May 27 03:10:07.066566 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 27 03:10:07.066566 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 03:10:07.066566 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 27 03:10:07.071284 extend-filesystems[1529]: Resized filesystem in /dev/vda9
May 27 03:10:07.069450 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 03:10:07.069743 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 03:10:07.074320 dbus-daemon[1526]: [system] SELinux support is enabled
May 27 03:10:07.074552 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:10:07.079049 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:10:07.079078 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:10:07.080461 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:10:07.080478 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:10:07.086400 systemd[1]: Started update-engine.service - Update Engine.
May 27 03:10:07.087100 update_engine[1540]: I20250527 03:10:07.086678  1540 update_check_scheduler.cc:74] Next update check in 9m38s
May 27 03:10:07.097393 kernel: kvm_amd: TSC scaling supported
May 27 03:10:07.097470 kernel: kvm_amd: Nested Virtualization enabled
May 27 03:10:07.097492 kernel: kvm_amd: Nested Paging enabled
May 27 03:10:07.097528 kernel: kvm_amd: LBR virtualization supported
May 27 03:10:07.098954 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 27 03:10:07.098984 kernel: kvm_amd: Virtual GIF supported
May 27 03:10:07.102007 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 03:10:07.120898 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 03:10:07.121268 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 03:10:07.134998 systemd-logind[1537]: New seat seat0.
May 27 03:10:07.138900 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 03:10:07.150434 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 03:10:07.154694 bash[1590]: Updated "/home/core/.ssh/authorized_keys"
May 27 03:10:07.159613 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 03:10:07.160514 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 03:10:07.196004 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 03:10:07.201692 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 03:10:07.211416 kernel: EDAC MC: Ver: 3.0.0
May 27 03:10:07.218275 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 03:10:07.222051 systemd[1]: issuegen.service: Deactivated successfully.
May 27 03:10:07.222540 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 03:10:07.226986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 03:10:07.251294 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 03:10:07.287502 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 03:10:07.302578 containerd[1550]: time="2025-05-27T03:10:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:10:07.305068 containerd[1550]: time="2025-05-27T03:10:07.305020769Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:10:07.316290 containerd[1550]: time="2025-05-27T03:10:07.316121318Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.836µs"
May 27 03:10:07.316290 containerd[1550]: time="2025-05-27T03:10:07.316195148Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:10:07.316290 containerd[1550]: time="2025-05-27T03:10:07.316219758Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.316489276Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.316513309Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.316547820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.316702333Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.316718844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.317140014Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.317158453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.317171759Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.317194273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.317316475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:10:07.317776 containerd[1550]: time="2025-05-27T03:10:07.317623971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:10:07.318060 containerd[1550]: time="2025-05-27T03:10:07.317663395Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:10:07.318060 containerd[1550]: time="2025-05-27T03:10:07.317676354Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:10:07.318060 containerd[1550]: time="2025-05-27T03:10:07.317718618Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:10:07.318060 containerd[1550]: time="2025-05-27T03:10:07.318035773Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:10:07.318153 containerd[1550]: time="2025-05-27T03:10:07.318123884Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:10:07.323726 containerd[1550]: time="2025-05-27T03:10:07.323628263Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 03:10:07.323872 containerd[1550]: time="2025-05-27T03:10:07.323790393Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 03:10:07.323872 containerd[1550]: time="2025-05-27T03:10:07.323838922Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 03:10:07.323872 containerd[1550]: time="2025-05-27T03:10:07.323857968Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 03:10:07.323955 containerd[1550]: time="2025-05-27T03:10:07.323882348Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 03:10:07.323955 containerd[1550]: time="2025-05-27T03:10:07.323922988Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:10:07.323955 containerd[1550]: time="2025-05-27T03:10:07.323942202Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:10:07.324045 containerd[1550]: time="2025-05-27T03:10:07.323957771Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:10:07.324045 containerd[1550]: time="2025-05-27T03:10:07.323974869Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:10:07.324045 containerd[1550]: time="2025-05-27T03:10:07.324005357Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:10:07.324045 containerd[1550]: time="2025-05-27T03:10:07.324020171Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:10:07.324045 containerd[1550]: time="2025-05-27T03:10:07.324039879Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:10:07.324288 containerd[1550]: time="2025-05-27T03:10:07.324255041Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:10:07.324332 containerd[1550]: time="2025-05-27T03:10:07.324295482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:10:07.324332 containerd[1550]: time="2025-05-27T03:10:07.324317452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:10:07.324396 containerd[1550]: time="2025-05-27T03:10:07.324358720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:10:07.324396 containerd[1550]: time="2025-05-27T03:10:07.324375850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:10:07.324396 containerd[1550]: time="2025-05-27T03:10:07.324391168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:10:07.324487 containerd[1550]: time="2025-05-27T03:10:07.324408244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:10:07.324487 containerd[1550]: time="2025-05-27T03:10:07.324424441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 03:10:07.324487 containerd[1550]: time="2025-05-27T03:10:07.324440231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 03:10:07.324487 containerd[1550]: time="2025-05-27T03:10:07.324455998Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 03:10:07.324487 containerd[1550]: time="2025-05-27T03:10:07.324471744Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:10:07.324638 containerd[1550]: time="2025-05-27T03:10:07.324588373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:10:07.324638 containerd[1550]: time="2025-05-27T03:10:07.324608981Z" level=info msg="Start snapshots syncer"
May 27 03:10:07.324701 containerd[1550]: time="2025-05-27T03:10:07.324653969Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:10:07.325311 containerd[1550]: time="2025-05-27T03:10:07.325208299Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:10:07.325458 containerd[1550]: time="2025-05-27T03:10:07.325337845Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:10:07.326658 containerd[1550]: time="2025-05-27T03:10:07.326621322Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 03:10:07.326847 containerd[1550]: time="2025-05-27T03:10:07.326814473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 03:10:07.326906 containerd[1550]: time="2025-05-27T03:10:07.326854473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 03:10:07.326906 containerd[1550]: time="2025-05-27T03:10:07.326872096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 03:10:07.326906 containerd[1550]: time="2025-05-27T03:10:07.326887183Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 03:10:07.326906 containerd[1550]: time="2025-05-27T03:10:07.326902626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 03:10:07.327019 containerd[1550]: time="2025-05-27T03:10:07.326919095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 03:10:07.327019 containerd[1550]: time="2025-05-27T03:10:07.326934004Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 03:10:07.327019 containerd[1550]: time="2025-05-27T03:10:07.326965162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 03:10:07.327019 containerd[1550]: time="2025-05-27T03:10:07.326980468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 03:10:07.327019 containerd[1550]: time="2025-05-27T03:10:07.326995838Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 03:10:07.328302 containerd[1550]: time="2025-05-27T03:10:07.328205023Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:10:07.328302 containerd[1550]: time="2025-05-27T03:10:07.328266167Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:10:07.328302 containerd[1550]: time="2025-05-27T03:10:07.328283044Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:10:07.328302 containerd[1550]: time="2025-05-27T03:10:07.328298100Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:10:07.328450 containerd[1550]: time="2025-05-27T03:10:07.328310913Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:10:07.328450 containerd[1550]: time="2025-05-27T03:10:07.328358771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:10:07.328450 containerd[1550]: time="2025-05-27T03:10:07.328377797Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:10:07.328450 containerd[1550]: time="2025-05-27T03:10:07.328403141Z" level=info msg="runtime interface created" May 27 03:10:07.328450 containerd[1550]: time="2025-05-27T03:10:07.328411417Z" level=info msg="created NRI interface" May 27 03:10:07.328450 containerd[1550]: time="2025-05-27T03:10:07.328446777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:10:07.328606 containerd[1550]: time="2025-05-27T03:10:07.328463162Z" level=info msg="Connect containerd service" May 27 03:10:07.328606 containerd[1550]: time="2025-05-27T03:10:07.328497736Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:10:07.331237 systemd[1]: 
Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:10:07.333521 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:10:07.333854 containerd[1550]: time="2025-05-27T03:10:07.333638140Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:10:07.374217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:10:07.452658 containerd[1550]: time="2025-05-27T03:10:07.452432768Z" level=info msg="Start subscribing containerd event" May 27 03:10:07.452658 containerd[1550]: time="2025-05-27T03:10:07.452503487Z" level=info msg="Start recovering state" May 27 03:10:07.452658 containerd[1550]: time="2025-05-27T03:10:07.452629847Z" level=info msg="Start event monitor" May 27 03:10:07.452658 containerd[1550]: time="2025-05-27T03:10:07.452650896Z" level=info msg="Start cni network conf syncer for default" May 27 03:10:07.452658 containerd[1550]: time="2025-05-27T03:10:07.452660786Z" level=info msg="Start streaming server" May 27 03:10:07.452922 containerd[1550]: time="2025-05-27T03:10:07.452671892Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:10:07.452922 containerd[1550]: time="2025-05-27T03:10:07.452705176Z" level=info msg="runtime interface starting up..." May 27 03:10:07.452922 containerd[1550]: time="2025-05-27T03:10:07.452739027Z" level=info msg="starting plugins..." May 27 03:10:07.452922 containerd[1550]: time="2025-05-27T03:10:07.452758399Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:10:07.452922 containerd[1550]: time="2025-05-27T03:10:07.452632142Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:10:07.453017 containerd[1550]: time="2025-05-27T03:10:07.453002834Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 27 03:10:07.453337 containerd[1550]: time="2025-05-27T03:10:07.453082730Z" level=info msg="containerd successfully booted in 0.151164s" May 27 03:10:07.453214 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:10:07.610837 tar[1549]: linux-amd64/README.md May 27 03:10:07.641228 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:10:08.093463 systemd-networkd[1488]: eth0: Gained IPv6LL May 27 03:10:08.099614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:10:08.101983 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:10:08.104906 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 03:10:08.107717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:10:08.110560 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:10:08.149779 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:10:08.176885 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 03:10:08.177294 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 03:10:08.179558 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:10:09.215039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:10:09.216886 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:10:09.218873 systemd[1]: Startup finished in 3.036s (kernel) + 7.838s (initrd) + 4.779s (userspace) = 15.654s. 
May 27 03:10:09.253182 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:10:09.797347 kubelet[1665]: E0527 03:10:09.797216 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:10:09.801392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:10:09.801645 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:10:09.802213 systemd[1]: kubelet.service: Consumed 1.340s CPU time, 265.8M memory peak.
May 27 03:10:10.430347 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 03:10:10.432164 systemd[1]: Started sshd@0-10.0.0.11:22-10.0.0.1:35736.service - OpenSSH per-connection server daemon (10.0.0.1:35736).
May 27 03:10:10.498969 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 35736 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:10:10.501031 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:10:10.509306 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 03:10:10.510811 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 03:10:10.518899 systemd-logind[1537]: New session 1 of user core.
May 27 03:10:10.541036 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 03:10:10.545044 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 03:10:10.558400 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 03:10:10.560835 systemd-logind[1537]: New session c1 of user core.
May 27 03:10:10.755963 systemd[1683]: Queued start job for default target default.target.
May 27 03:10:10.767056 systemd[1683]: Created slice app.slice - User Application Slice.
May 27 03:10:10.767083 systemd[1683]: Reached target paths.target - Paths.
May 27 03:10:10.767128 systemd[1683]: Reached target timers.target - Timers.
May 27 03:10:10.768782 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 03:10:10.780343 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 03:10:10.780473 systemd[1683]: Reached target sockets.target - Sockets.
May 27 03:10:10.780520 systemd[1683]: Reached target basic.target - Basic System.
May 27 03:10:10.780562 systemd[1683]: Reached target default.target - Main User Target.
May 27 03:10:10.780594 systemd[1683]: Startup finished in 212ms.
May 27 03:10:10.781113 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 03:10:10.790897 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 03:10:10.862461 systemd[1]: Started sshd@1-10.0.0.11:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750).
May 27 03:10:10.918699 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:10:10.920544 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:10:10.925334 systemd-logind[1537]: New session 2 of user core.
May 27 03:10:10.935011 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 03:10:10.991773 sshd[1696]: Connection closed by 10.0.0.1 port 35750
May 27 03:10:10.992213 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
May 27 03:10:11.001449 systemd[1]: sshd@1-10.0.0.11:22-10.0.0.1:35750.service: Deactivated successfully.
May 27 03:10:11.003267 systemd[1]: session-2.scope: Deactivated successfully.
May 27 03:10:11.004127 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit.
May 27 03:10:11.007159 systemd[1]: Started sshd@2-10.0.0.11:22-10.0.0.1:35756.service - OpenSSH per-connection server daemon (10.0.0.1:35756).
May 27 03:10:11.007755 systemd-logind[1537]: Removed session 2.
May 27 03:10:11.069906 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 35756 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:10:11.071519 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:10:11.076718 systemd-logind[1537]: New session 3 of user core.
May 27 03:10:11.090971 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 03:10:11.143697 sshd[1704]: Connection closed by 10.0.0.1 port 35756
May 27 03:10:11.144044 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
May 27 03:10:11.153848 systemd[1]: sshd@2-10.0.0.11:22-10.0.0.1:35756.service: Deactivated successfully.
May 27 03:10:11.155933 systemd[1]: session-3.scope: Deactivated successfully.
May 27 03:10:11.156975 systemd-logind[1537]: Session 3 logged out. Waiting for processes to exit.
May 27 03:10:11.159891 systemd[1]: Started sshd@3-10.0.0.11:22-10.0.0.1:35760.service - OpenSSH per-connection server daemon (10.0.0.1:35760).
May 27 03:10:11.160707 systemd-logind[1537]: Removed session 3.
May 27 03:10:11.223373 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 35760 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:10:11.225024 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:10:11.229814 systemd-logind[1537]: New session 4 of user core.
May 27 03:10:11.239502 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:10:11.297522 sshd[1712]: Connection closed by 10.0.0.1 port 35760
May 27 03:10:11.297768 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
May 27 03:10:11.321696 systemd[1]: sshd@3-10.0.0.11:22-10.0.0.1:35760.service: Deactivated successfully.
May 27 03:10:11.323584 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:10:11.324407 systemd-logind[1537]: Session 4 logged out. Waiting for processes to exit.
May 27 03:10:11.327292 systemd[1]: Started sshd@4-10.0.0.11:22-10.0.0.1:35774.service - OpenSSH per-connection server daemon (10.0.0.1:35774).
May 27 03:10:11.328106 systemd-logind[1537]: Removed session 4.
May 27 03:10:11.380715 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 35774 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:10:11.382172 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:10:11.387654 systemd-logind[1537]: New session 5 of user core.
May 27 03:10:11.400925 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:10:11.463510 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:10:11.463866 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:10:11.476793 sudo[1722]: pam_unix(sudo:session): session closed for user root
May 27 03:10:11.478483 sshd[1721]: Connection closed by 10.0.0.1 port 35774
May 27 03:10:11.478826 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
May 27 03:10:11.491346 systemd[1]: sshd@4-10.0.0.11:22-10.0.0.1:35774.service: Deactivated successfully.
May 27 03:10:11.493204 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:10:11.493936 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit.
May 27 03:10:11.497237 systemd[1]: Started sshd@5-10.0.0.11:22-10.0.0.1:35776.service - OpenSSH per-connection server daemon (10.0.0.1:35776).
May 27 03:10:11.497908 systemd-logind[1537]: Removed session 5.
May 27 03:10:11.544264 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 35776 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:10:11.546038 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:10:11.551003 systemd-logind[1537]: New session 6 of user core.
May 27 03:10:11.560936 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:10:11.616894 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:10:11.617204 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:10:11.624627 sudo[1732]: pam_unix(sudo:session): session closed for user root
May 27 03:10:11.630554 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:10:11.630892 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:10:11.641346 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:10:11.685146 augenrules[1754]: No rules
May 27 03:10:11.687049 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:10:11.687345 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:10:11.688569 sudo[1731]: pam_unix(sudo:session): session closed for user root
May 27 03:10:11.690113 sshd[1730]: Connection closed by 10.0.0.1 port 35776
May 27 03:10:11.690518 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
May 27 03:10:11.699587 systemd[1]: sshd@5-10.0.0.11:22-10.0.0.1:35776.service: Deactivated successfully.
May 27 03:10:11.701315 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:10:11.702156 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit.
May 27 03:10:11.705273 systemd[1]: Started sshd@6-10.0.0.11:22-10.0.0.1:35790.service - OpenSSH per-connection server daemon (10.0.0.1:35790).
May 27 03:10:11.705989 systemd-logind[1537]: Removed session 6.
May 27 03:10:11.763294 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 35790 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0
May 27 03:10:11.765272 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:10:11.770058 systemd-logind[1537]: New session 7 of user core.
May 27 03:10:11.779886 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:10:11.835775 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:10:11.836107 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:10:12.246425 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:10:12.268281 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:10:12.655567 dockerd[1787]: time="2025-05-27T03:10:12.655426373Z" level=info msg="Starting up"
May 27 03:10:12.656200 dockerd[1787]: time="2025-05-27T03:10:12.656163986Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:10:13.149568 dockerd[1787]: time="2025-05-27T03:10:13.149436359Z" level=info msg="Loading containers: start."
May 27 03:10:13.161772 kernel: Initializing XFRM netlink socket
May 27 03:10:13.429099 systemd-networkd[1488]: docker0: Link UP
May 27 03:10:13.435379 dockerd[1787]: time="2025-05-27T03:10:13.435321894Z" level=info msg="Loading containers: done."
May 27 03:10:13.449812 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3436521101-merged.mount: Deactivated successfully.
May 27 03:10:13.451334 dockerd[1787]: time="2025-05-27T03:10:13.451271484Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:10:13.451501 dockerd[1787]: time="2025-05-27T03:10:13.451380732Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:10:13.451534 dockerd[1787]: time="2025-05-27T03:10:13.451513055Z" level=info msg="Initializing buildkit"
May 27 03:10:13.484960 dockerd[1787]: time="2025-05-27T03:10:13.484903622Z" level=info msg="Completed buildkit initialization"
May 27 03:10:13.491379 dockerd[1787]: time="2025-05-27T03:10:13.491331589Z" level=info msg="Daemon has completed initialization"
May 27 03:10:13.491508 dockerd[1787]: time="2025-05-27T03:10:13.491422105Z" level=info msg="API listen on /run/docker.sock"
May 27 03:10:13.491684 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:10:14.708236 containerd[1550]: time="2025-05-27T03:10:14.708180327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 27 03:10:15.438548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757614396.mount: Deactivated successfully.
May 27 03:10:17.221906 containerd[1550]: time="2025-05-27T03:10:17.221826595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:17.222799 containerd[1550]: time="2025-05-27T03:10:17.222772682Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811"
May 27 03:10:17.224131 containerd[1550]: time="2025-05-27T03:10:17.224090627Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:17.227434 containerd[1550]: time="2025-05-27T03:10:17.227362720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:17.228607 containerd[1550]: time="2025-05-27T03:10:17.228561884Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.520338916s"
May 27 03:10:17.228676 containerd[1550]: time="2025-05-27T03:10:17.228607136Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 27 03:10:17.229505 containerd[1550]: time="2025-05-27T03:10:17.229471013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 03:10:18.983574 containerd[1550]: time="2025-05-27T03:10:18.983480496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:18.984782 containerd[1550]: time="2025-05-27T03:10:18.984694260Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523"
May 27 03:10:18.987148 containerd[1550]: time="2025-05-27T03:10:18.987107977Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:18.990289 containerd[1550]: time="2025-05-27T03:10:18.990225663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:18.991238 containerd[1550]: time="2025-05-27T03:10:18.991088511Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.761572433s"
May 27 03:10:18.991238 containerd[1550]: time="2025-05-27T03:10:18.991127358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 27 03:10:18.991874 containerd[1550]: time="2025-05-27T03:10:18.991844143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 03:10:20.052052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 03:10:20.054402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:10:20.363741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:10:20.369656 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:10:21.444744 kubelet[2070]: E0527 03:10:21.444625 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:10:21.451586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:10:21.451841 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:10:21.452238 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.6M memory peak.
May 27 03:10:21.784340 containerd[1550]: time="2025-05-27T03:10:21.784170800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:21.785291 containerd[1550]: time="2025-05-27T03:10:21.785231573Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063"
May 27 03:10:21.786588 containerd[1550]: time="2025-05-27T03:10:21.786528568Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:21.789086 containerd[1550]: time="2025-05-27T03:10:21.789034327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:21.790170 containerd[1550]: time="2025-05-27T03:10:21.790124470Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.798243297s"
May 27 03:10:21.790170 containerd[1550]: time="2025-05-27T03:10:21.790162094Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 27 03:10:21.790709 containerd[1550]: time="2025-05-27T03:10:21.790678279Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 03:10:23.157821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488874136.mount: Deactivated successfully.
May 27 03:10:23.739549 containerd[1550]: time="2025-05-27T03:10:23.739463451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:23.740972 containerd[1550]: time="2025-05-27T03:10:23.740916916Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872"
May 27 03:10:23.742252 containerd[1550]: time="2025-05-27T03:10:23.742210494Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:23.744307 containerd[1550]: time="2025-05-27T03:10:23.744231363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:23.744591 containerd[1550]: time="2025-05-27T03:10:23.744547934Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.953842548s"
May 27 03:10:23.744591 containerd[1550]: time="2025-05-27T03:10:23.744576269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 27 03:10:23.745248 containerd[1550]: time="2025-05-27T03:10:23.745214626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 03:10:24.291892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017408939.mount: Deactivated successfully.
May 27 03:10:25.113641 containerd[1550]: time="2025-05-27T03:10:25.113555248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:25.115226 containerd[1550]: time="2025-05-27T03:10:25.115190039Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 27 03:10:25.116486 containerd[1550]: time="2025-05-27T03:10:25.116432685Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:25.119520 containerd[1550]: time="2025-05-27T03:10:25.119455773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:25.120574 containerd[1550]: time="2025-05-27T03:10:25.120528964Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.375286408s"
May 27 03:10:25.120574 containerd[1550]: time="2025-05-27T03:10:25.120563632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 27 03:10:25.121269 containerd[1550]: time="2025-05-27T03:10:25.121219104Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 03:10:26.060788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232780029.mount: Deactivated successfully.
May 27 03:10:26.222191 containerd[1550]: time="2025-05-27T03:10:26.222120083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:10:26.244579 containerd[1550]: time="2025-05-27T03:10:26.244549987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 27 03:10:26.250440 containerd[1550]: time="2025-05-27T03:10:26.250377765Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:10:26.259129 containerd[1550]: time="2025-05-27T03:10:26.259057540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 03:10:26.259619 containerd[1550]: time="2025-05-27T03:10:26.259584171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.138327245s"
May 27 03:10:26.259619 containerd[1550]: time="2025-05-27T03:10:26.259613029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 27 03:10:26.260242 containerd[1550]: time="2025-05-27T03:10:26.260203953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 27 03:10:26.877168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321047176.mount: Deactivated successfully.
May 27 03:10:28.691963 containerd[1550]: time="2025-05-27T03:10:28.691773330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:28.692554 containerd[1550]: time="2025-05-27T03:10:28.692496068Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 27 03:10:28.694000 containerd[1550]: time="2025-05-27T03:10:28.693952251Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:28.696615 containerd[1550]: time="2025-05-27T03:10:28.696581970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:10:28.697776 containerd[1550]: time="2025-05-27T03:10:28.697735502Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size
\"57680541\" in 2.437476118s" May 27 03:10:28.697848 containerd[1550]: time="2025-05-27T03:10:28.697774362Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 03:10:30.997466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:10:30.997650 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.6M memory peak. May 27 03:10:31.000070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:10:31.024010 systemd[1]: Reload requested from client PID 2226 ('systemctl') (unit session-7.scope)... May 27 03:10:31.024026 systemd[1]: Reloading... May 27 03:10:31.124885 zram_generator::config[2274]: No configuration found. May 27 03:10:31.505142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:10:31.645138 systemd[1]: Reloading finished in 620 ms. May 27 03:10:31.720784 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:10:31.720936 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:10:31.721357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:10:31.721419 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.3M memory peak. May 27 03:10:31.723675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:10:31.939345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 03:10:31.959229 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:10:32.006482 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:10:32.006482 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:10:32.006482 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:10:32.006989 kubelet[2316]: I0527 03:10:32.006605 2316 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:10:32.394149 kubelet[2316]: I0527 03:10:32.394014 2316 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:10:32.394149 kubelet[2316]: I0527 03:10:32.394042 2316 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:10:32.394353 kubelet[2316]: I0527 03:10:32.394326 2316 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:10:32.430523 kubelet[2316]: E0527 03:10:32.430452 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:32.431592 kubelet[2316]: I0527 03:10:32.431550 
2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:10:32.440457 kubelet[2316]: I0527 03:10:32.440404 2316 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:10:32.447390 kubelet[2316]: I0527 03:10:32.447330 2316 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:10:32.447694 kubelet[2316]: I0527 03:10:32.447646 2316 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:10:32.447875 kubelet[2316]: I0527 03:10:32.447681 2316 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:10:32.448562 kubelet[2316]: I0527 03:10:32.448535 2316 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:10:32.448562 kubelet[2316]: I0527 03:10:32.448552 2316 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:10:32.448721 kubelet[2316]: I0527 03:10:32.448698 2316 state_mem.go:36] "Initialized new in-memory state store" May 27 03:10:32.451563 kubelet[2316]: I0527 03:10:32.451534 2316 kubelet.go:446] "Attempting to sync node with API server" May 27 03:10:32.451603 kubelet[2316]: I0527 03:10:32.451564 2316 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:10:32.451603 kubelet[2316]: I0527 03:10:32.451587 2316 kubelet.go:352] "Adding apiserver pod source" May 27 03:10:32.451603 kubelet[2316]: I0527 03:10:32.451599 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:10:32.454440 kubelet[2316]: I0527 03:10:32.454415 2316 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:10:32.454821 kubelet[2316]: I0527 03:10:32.454799 2316 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:10:32.454880 kubelet[2316]: W0527 03:10:32.454852 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 27 03:10:32.459112 kubelet[2316]: W0527 03:10:32.458896 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:32.459112 kubelet[2316]: E0527 03:10:32.458989 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:32.459112 kubelet[2316]: W0527 03:10:32.458899 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:32.459112 kubelet[2316]: E0527 03:10:32.459029 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:32.460356 kubelet[2316]: I0527 03:10:32.460289 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:10:32.460356 kubelet[2316]: I0527 03:10:32.460361 2316 server.go:1287] "Started kubelet" May 27 03:10:32.461247 kubelet[2316]: I0527 03:10:32.460908 2316 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:10:32.462957 kubelet[2316]: I0527 03:10:32.462918 2316 server.go:479] "Adding debug handlers to kubelet server" May 27 03:10:32.464121 kubelet[2316]: I0527 03:10:32.464035 2316 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:10:32.466027 kubelet[2316]: I0527 03:10:32.465986 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:10:32.466416 kubelet[2316]: I0527 03:10:32.466390 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:10:32.466503 kubelet[2316]: E0527 03:10:32.465369 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.11:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.11:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1843439f7321cb7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:10:32.46034009 +0000 UTC m=+0.496687558,LastTimestamp:2025-05-27 03:10:32.46034009 +0000 UTC m=+0.496687558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:10:32.466686 kubelet[2316]: I0527 03:10:32.466672 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:10:32.466862 kubelet[2316]: I0527 03:10:32.466831 2316 reconciler.go:26] "Reconciler: start to sync state" May 27 03:10:32.467318 kubelet[2316]: I0527 03:10:32.467287 2316 factory.go:221] Registration of the systemd container factory successfully May 27 03:10:32.467411 kubelet[2316]: I0527 03:10:32.467389 2316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:10:32.467491 kubelet[2316]: I0527 03:10:32.467456 2316 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:10:32.467558 kubelet[2316]: W0527 03:10:32.467388 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:32.467594 kubelet[2316]: E0527 03:10:32.467582 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:32.467749 kubelet[2316]: I0527 03:10:32.467692 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:10:32.468937 kubelet[2316]: E0527 03:10:32.468896 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:10:32.469114 kubelet[2316]: I0527 03:10:32.469079 2316 factory.go:221] Registration of the containerd container factory successfully May 27 03:10:32.469231 kubelet[2316]: E0527 03:10:32.469201 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="200ms" May 27 03:10:32.470197 kubelet[2316]: E0527 03:10:32.470173 2316 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:10:32.485222 kubelet[2316]: I0527 03:10:32.485166 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:10:32.485222 kubelet[2316]: I0527 03:10:32.485190 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:10:32.485222 kubelet[2316]: I0527 03:10:32.485209 2316 state_mem.go:36] "Initialized new in-memory state store" May 27 03:10:32.487896 kubelet[2316]: I0527 03:10:32.487850 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:10:32.489418 kubelet[2316]: I0527 03:10:32.489391 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:10:32.489418 kubelet[2316]: I0527 03:10:32.489419 2316 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:10:32.489488 kubelet[2316]: I0527 03:10:32.489439 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:10:32.489488 kubelet[2316]: I0527 03:10:32.489447 2316 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:10:32.489546 kubelet[2316]: E0527 03:10:32.489498 2316 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:10:32.490298 kubelet[2316]: W0527 03:10:32.490238 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:32.490358 kubelet[2316]: E0527 03:10:32.490307 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:32.569382 kubelet[2316]: E0527 03:10:32.569293 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:10:32.590661 kubelet[2316]: E0527 03:10:32.590599 2316 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:10:32.670477 kubelet[2316]: E0527 03:10:32.670271 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:10:32.671000 kubelet[2316]: E0527 03:10:32.670901 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="400ms" May 27 03:10:32.770409 kubelet[2316]: E0527 03:10:32.770351 2316 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" May 27 03:10:32.791738 kubelet[2316]: E0527 03:10:32.791653 2316 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:10:32.814603 kubelet[2316]: I0527 03:10:32.814534 2316 policy_none.go:49] "None policy: Start" May 27 03:10:32.814603 kubelet[2316]: I0527 03:10:32.814579 2316 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:10:32.814603 kubelet[2316]: I0527 03:10:32.814594 2316 state_mem.go:35] "Initializing new in-memory state store" May 27 03:10:32.824504 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:10:32.843777 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:10:32.847360 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 03:10:32.857952 kubelet[2316]: I0527 03:10:32.857907 2316 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:10:32.858215 kubelet[2316]: I0527 03:10:32.858184 2316 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:10:32.858272 kubelet[2316]: I0527 03:10:32.858199 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:10:32.858438 kubelet[2316]: I0527 03:10:32.858420 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:10:32.859508 kubelet[2316]: E0527 03:10:32.859461 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 03:10:32.859508 kubelet[2316]: E0527 03:10:32.859503 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 03:10:32.959751 kubelet[2316]: I0527 03:10:32.959570 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:10:32.960238 kubelet[2316]: E0527 03:10:32.960195 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" May 27 03:10:33.072149 kubelet[2316]: E0527 03:10:33.072065 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="800ms" May 27 03:10:33.161925 kubelet[2316]: I0527 03:10:33.161866 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:10:33.162397 kubelet[2316]: E0527 03:10:33.162353 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" May 27 03:10:33.201917 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 27 03:10:33.214102 kubelet[2316]: E0527 03:10:33.213971 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:10:33.216867 systemd[1]: Created slice kubepods-burstable-pod4730dcceb68122c9dfde74d46ef0fbab.slice - libcontainer container kubepods-burstable-pod4730dcceb68122c9dfde74d46ef0fbab.slice. 
May 27 03:10:33.219045 kubelet[2316]: E0527 03:10:33.219013 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:10:33.221545 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 27 03:10:33.223911 kubelet[2316]: E0527 03:10:33.223879 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:10:33.270691 kubelet[2316]: I0527 03:10:33.270614 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:33.270691 kubelet[2316]: I0527 03:10:33.270676 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:10:33.270691 kubelet[2316]: I0527 03:10:33.270702 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4730dcceb68122c9dfde74d46ef0fbab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4730dcceb68122c9dfde74d46ef0fbab\") " pod="kube-system/kube-apiserver-localhost" May 27 03:10:33.270958 kubelet[2316]: I0527 03:10:33.270753 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:33.270958 kubelet[2316]: I0527 03:10:33.270777 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:33.270958 kubelet[2316]: I0527 03:10:33.270797 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:33.270958 kubelet[2316]: I0527 03:10:33.270818 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:33.270958 kubelet[2316]: I0527 03:10:33.270835 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4730dcceb68122c9dfde74d46ef0fbab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4730dcceb68122c9dfde74d46ef0fbab\") " pod="kube-system/kube-apiserver-localhost" May 27 03:10:33.271090 kubelet[2316]: I0527 03:10:33.270855 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4730dcceb68122c9dfde74d46ef0fbab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4730dcceb68122c9dfde74d46ef0fbab\") " pod="kube-system/kube-apiserver-localhost" May 27 03:10:33.357577 kubelet[2316]: W0527 03:10:33.357500 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:33.357767 kubelet[2316]: E0527 03:10:33.357561 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:33.456903 kubelet[2316]: W0527 03:10:33.456826 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:33.456903 kubelet[2316]: E0527 03:10:33.456897 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:33.515685 kubelet[2316]: E0527 03:10:33.515506 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:33.516636 containerd[1550]: time="2025-05-27T03:10:33.516581468Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 03:10:33.519832 kubelet[2316]: E0527 03:10:33.519792 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:33.520323 containerd[1550]: time="2025-05-27T03:10:33.520253081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4730dcceb68122c9dfde74d46ef0fbab,Namespace:kube-system,Attempt:0,}" May 27 03:10:33.524623 kubelet[2316]: E0527 03:10:33.524587 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:33.524935 containerd[1550]: time="2025-05-27T03:10:33.524897374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 03:10:33.564934 kubelet[2316]: I0527 03:10:33.564866 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:10:33.565480 kubelet[2316]: E0527 03:10:33.565423 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" May 27 03:10:33.650625 kubelet[2316]: W0527 03:10:33.650529 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:33.650625 kubelet[2316]: E0527 03:10:33.650612 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:33.805561 kubelet[2316]: W0527 03:10:33.805358 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.11:6443: connect: connection refused May 27 03:10:33.805561 kubelet[2316]: E0527 03:10:33.805447 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:33.873409 kubelet[2316]: E0527 03:10:33.873281 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.11:6443: connect: connection refused" interval="1.6s" May 27 03:10:34.019616 containerd[1550]: time="2025-05-27T03:10:34.018407096Z" level=info msg="connecting to shim 447d1d0eb4985409d574718fb09c25a590613d6df02879f240cbb35171229a16" address="unix:///run/containerd/s/7c30d0094d48c6a30684da000595e309026e08509087d1c2e79e8a112ec6af1e" namespace=k8s.io protocol=ttrpc version=3 May 27 03:10:34.031073 containerd[1550]: time="2025-05-27T03:10:34.031010263Z" level=info msg="connecting to shim 254870b1a9c4df70fc5b26096286beba3a1d3cf7dec55a75acb2dd0f9fd8ca3f" address="unix:///run/containerd/s/06cf949a2ce5dfc962892b716f8cf661fa4a69de6081b71ca4669445276cc5b2" namespace=k8s.io protocol=ttrpc version=3 May 27 03:10:34.061950 systemd[1]: Started cri-containerd-447d1d0eb4985409d574718fb09c25a590613d6df02879f240cbb35171229a16.scope - libcontainer container 
447d1d0eb4985409d574718fb09c25a590613d6df02879f240cbb35171229a16. May 27 03:10:34.094729 containerd[1550]: time="2025-05-27T03:10:34.094190110Z" level=info msg="connecting to shim 039fdd1ebd0d0691a17669377fa6e1e941c594f219eb21e51c72f92e90aa9749" address="unix:///run/containerd/s/430ab3eccef87c2486ae9e919cc2e3d7a72d49a1154b527e109df66e2c8358c1" namespace=k8s.io protocol=ttrpc version=3 May 27 03:10:34.097286 systemd[1]: Started cri-containerd-254870b1a9c4df70fc5b26096286beba3a1d3cf7dec55a75acb2dd0f9fd8ca3f.scope - libcontainer container 254870b1a9c4df70fc5b26096286beba3a1d3cf7dec55a75acb2dd0f9fd8ca3f. May 27 03:10:34.162324 systemd[1]: Started cri-containerd-039fdd1ebd0d0691a17669377fa6e1e941c594f219eb21e51c72f92e90aa9749.scope - libcontainer container 039fdd1ebd0d0691a17669377fa6e1e941c594f219eb21e51c72f92e90aa9749. May 27 03:10:34.168520 containerd[1550]: time="2025-05-27T03:10:34.168458146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4730dcceb68122c9dfde74d46ef0fbab,Namespace:kube-system,Attempt:0,} returns sandbox id \"447d1d0eb4985409d574718fb09c25a590613d6df02879f240cbb35171229a16\"" May 27 03:10:34.171164 kubelet[2316]: E0527 03:10:34.171037 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:34.174050 containerd[1550]: time="2025-05-27T03:10:34.174012052Z" level=info msg="CreateContainer within sandbox \"447d1d0eb4985409d574718fb09c25a590613d6df02879f240cbb35171229a16\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:10:34.250845 containerd[1550]: time="2025-05-27T03:10:34.250785248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"254870b1a9c4df70fc5b26096286beba3a1d3cf7dec55a75acb2dd0f9fd8ca3f\"" May 27 03:10:34.251831 
kubelet[2316]: E0527 03:10:34.251786 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:34.253951 containerd[1550]: time="2025-05-27T03:10:34.253906849Z" level=info msg="CreateContainer within sandbox \"254870b1a9c4df70fc5b26096286beba3a1d3cf7dec55a75acb2dd0f9fd8ca3f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:10:34.272746 containerd[1550]: time="2025-05-27T03:10:34.272666158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"039fdd1ebd0d0691a17669377fa6e1e941c594f219eb21e51c72f92e90aa9749\"" May 27 03:10:34.273567 kubelet[2316]: E0527 03:10:34.273539 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:34.275496 containerd[1550]: time="2025-05-27T03:10:34.275368031Z" level=info msg="CreateContainer within sandbox \"039fdd1ebd0d0691a17669377fa6e1e941c594f219eb21e51c72f92e90aa9749\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:10:34.281829 containerd[1550]: time="2025-05-27T03:10:34.281739101Z" level=info msg="Container ad0d7922935f57b9323f2f738ff398d6a76b4576ea3489cb3ea684c678beece1: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:34.360615 containerd[1550]: time="2025-05-27T03:10:34.360410249Z" level=info msg="CreateContainer within sandbox \"447d1d0eb4985409d574718fb09c25a590613d6df02879f240cbb35171229a16\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ad0d7922935f57b9323f2f738ff398d6a76b4576ea3489cb3ea684c678beece1\"" May 27 03:10:34.361657 containerd[1550]: time="2025-05-27T03:10:34.361594626Z" level=info msg="StartContainer for 
\"ad0d7922935f57b9323f2f738ff398d6a76b4576ea3489cb3ea684c678beece1\"" May 27 03:10:34.363060 containerd[1550]: time="2025-05-27T03:10:34.363016073Z" level=info msg="connecting to shim ad0d7922935f57b9323f2f738ff398d6a76b4576ea3489cb3ea684c678beece1" address="unix:///run/containerd/s/7c30d0094d48c6a30684da000595e309026e08509087d1c2e79e8a112ec6af1e" protocol=ttrpc version=3 May 27 03:10:34.367174 kubelet[2316]: I0527 03:10:34.367144 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:10:34.367636 kubelet[2316]: E0527 03:10:34.367602 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.11:6443/api/v1/nodes\": dial tcp 10.0.0.11:6443: connect: connection refused" node="localhost" May 27 03:10:34.389049 containerd[1550]: time="2025-05-27T03:10:34.388991503Z" level=info msg="Container c18129ea570543aaf3db91575d03a21dc644f2f0ae9654b15a25619c43ade62a: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:34.396168 systemd[1]: Started cri-containerd-ad0d7922935f57b9323f2f738ff398d6a76b4576ea3489cb3ea684c678beece1.scope - libcontainer container ad0d7922935f57b9323f2f738ff398d6a76b4576ea3489cb3ea684c678beece1. 
May 27 03:10:34.423936 containerd[1550]: time="2025-05-27T03:10:34.423880971Z" level=info msg="Container a6654017380bc4f6cfe1c990c89954afc2de7ed3ff237873dd8ddfd43c902895: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:34.444055 kubelet[2316]: E0527 03:10:34.443699 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.11:6443: connect: connection refused" logger="UnhandledError" May 27 03:10:34.569473 containerd[1550]: time="2025-05-27T03:10:34.569421394Z" level=info msg="StartContainer for \"ad0d7922935f57b9323f2f738ff398d6a76b4576ea3489cb3ea684c678beece1\" returns successfully" May 27 03:10:34.642782 containerd[1550]: time="2025-05-27T03:10:34.642586756Z" level=info msg="CreateContainer within sandbox \"254870b1a9c4df70fc5b26096286beba3a1d3cf7dec55a75acb2dd0f9fd8ca3f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c18129ea570543aaf3db91575d03a21dc644f2f0ae9654b15a25619c43ade62a\"" May 27 03:10:34.643351 containerd[1550]: time="2025-05-27T03:10:34.643295864Z" level=info msg="StartContainer for \"c18129ea570543aaf3db91575d03a21dc644f2f0ae9654b15a25619c43ade62a\"" May 27 03:10:34.645134 containerd[1550]: time="2025-05-27T03:10:34.645082619Z" level=info msg="connecting to shim c18129ea570543aaf3db91575d03a21dc644f2f0ae9654b15a25619c43ade62a" address="unix:///run/containerd/s/06cf949a2ce5dfc962892b716f8cf661fa4a69de6081b71ca4669445276cc5b2" protocol=ttrpc version=3 May 27 03:10:34.673428 containerd[1550]: time="2025-05-27T03:10:34.673361160Z" level=info msg="CreateContainer within sandbox \"039fdd1ebd0d0691a17669377fa6e1e941c594f219eb21e51c72f92e90aa9749\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"a6654017380bc4f6cfe1c990c89954afc2de7ed3ff237873dd8ddfd43c902895\"" May 27 03:10:34.674066 systemd[1]: Started cri-containerd-c18129ea570543aaf3db91575d03a21dc644f2f0ae9654b15a25619c43ade62a.scope - libcontainer container c18129ea570543aaf3db91575d03a21dc644f2f0ae9654b15a25619c43ade62a. May 27 03:10:34.674850 containerd[1550]: time="2025-05-27T03:10:34.674572652Z" level=info msg="StartContainer for \"a6654017380bc4f6cfe1c990c89954afc2de7ed3ff237873dd8ddfd43c902895\"" May 27 03:10:34.677129 containerd[1550]: time="2025-05-27T03:10:34.677086731Z" level=info msg="connecting to shim a6654017380bc4f6cfe1c990c89954afc2de7ed3ff237873dd8ddfd43c902895" address="unix:///run/containerd/s/430ab3eccef87c2486ae9e919cc2e3d7a72d49a1154b527e109df66e2c8358c1" protocol=ttrpc version=3 May 27 03:10:34.713965 systemd[1]: Started cri-containerd-a6654017380bc4f6cfe1c990c89954afc2de7ed3ff237873dd8ddfd43c902895.scope - libcontainer container a6654017380bc4f6cfe1c990c89954afc2de7ed3ff237873dd8ddfd43c902895. May 27 03:10:34.913035 containerd[1550]: time="2025-05-27T03:10:34.912804684Z" level=info msg="StartContainer for \"a6654017380bc4f6cfe1c990c89954afc2de7ed3ff237873dd8ddfd43c902895\" returns successfully" May 27 03:10:34.913825 containerd[1550]: time="2025-05-27T03:10:34.913795340Z" level=info msg="StartContainer for \"c18129ea570543aaf3db91575d03a21dc644f2f0ae9654b15a25619c43ade62a\" returns successfully" May 27 03:10:35.583550 kubelet[2316]: E0527 03:10:35.583508 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:10:35.584836 kubelet[2316]: E0527 03:10:35.584808 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:35.590447 kubelet[2316]: E0527 03:10:35.590394 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:10:35.590609 kubelet[2316]: E0527 03:10:35.590589 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:35.592279 kubelet[2316]: E0527 03:10:35.592117 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:10:35.592350 kubelet[2316]: E0527 03:10:35.592324 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:35.972545 kubelet[2316]: I0527 03:10:35.970540 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:10:36.393322 kubelet[2316]: E0527 03:10:36.393027 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 03:10:36.474368 kubelet[2316]: I0527 03:10:36.474298 2316 apiserver.go:52] "Watching apiserver" May 27 03:10:36.475973 kubelet[2316]: I0527 03:10:36.475917 2316 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:10:36.523388 kubelet[2316]: E0527 03:10:36.523221 2316 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1843439f7321cb7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:10:32.46034009 +0000 UTC m=+0.496687558,LastTimestamp:2025-05-27 03:10:32.46034009 +0000 UTC 
m=+0.496687558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:10:36.567401 kubelet[2316]: I0527 03:10:36.567347 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:10:36.569551 kubelet[2316]: I0527 03:10:36.569498 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:10:36.575655 kubelet[2316]: E0527 03:10:36.575597 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:10:36.575655 kubelet[2316]: I0527 03:10:36.575640 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:10:36.578290 kubelet[2316]: E0527 03:10:36.578245 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:10:36.578290 kubelet[2316]: I0527 03:10:36.578279 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:10:36.579872 kubelet[2316]: E0527 03:10:36.579841 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:10:36.587106 kubelet[2316]: I0527 03:10:36.587060 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:10:36.587106 kubelet[2316]: I0527 03:10:36.587106 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:10:36.587608 
kubelet[2316]: I0527 03:10:36.587061 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:10:36.589749 kubelet[2316]: E0527 03:10:36.589412 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:10:36.589749 kubelet[2316]: E0527 03:10:36.589442 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:10:36.589749 kubelet[2316]: E0527 03:10:36.589466 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:10:36.589749 kubelet[2316]: E0527 03:10:36.589621 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:36.589749 kubelet[2316]: E0527 03:10:36.589630 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:36.589945 kubelet[2316]: E0527 03:10:36.589870 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:37.588492 kubelet[2316]: I0527 03:10:37.588457 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:10:37.615481 kubelet[2316]: E0527 03:10:37.615432 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:38.590032 kubelet[2316]: E0527 03:10:38.589992 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:38.901243 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)... May 27 03:10:38.901265 systemd[1]: Reloading... May 27 03:10:39.014754 zram_generator::config[2638]: No configuration found. May 27 03:10:39.121084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:10:39.260389 systemd[1]: Reloading finished in 358 ms. May 27 03:10:39.297274 kubelet[2316]: I0527 03:10:39.297195 2316 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:10:39.297337 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:10:39.320187 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:10:39.320545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:10:39.320618 systemd[1]: kubelet.service: Consumed 1.087s CPU time, 131.1M memory peak. May 27 03:10:39.322793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:10:39.556539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:10:39.569257 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:10:39.625025 kubelet[2683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:10:39.625025 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:10:39.625025 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:10:39.625560 kubelet[2683]: I0527 03:10:39.625290 2683 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:10:39.634081 kubelet[2683]: I0527 03:10:39.634031 2683 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:10:39.634081 kubelet[2683]: I0527 03:10:39.634068 2683 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:10:39.634400 kubelet[2683]: I0527 03:10:39.634385 2683 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:10:39.635901 kubelet[2683]: I0527 03:10:39.635873 2683 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 03:10:39.638645 kubelet[2683]: I0527 03:10:39.638584 2683 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:10:39.645528 kubelet[2683]: I0527 03:10:39.645413 2683 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:10:39.652418 kubelet[2683]: I0527 03:10:39.652369 2683 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:10:39.652762 kubelet[2683]: I0527 03:10:39.652676 2683 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:10:39.652960 kubelet[2683]: I0527 03:10:39.652769 2683 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:10:39.652960 kubelet[2683]: I0527 03:10:39.652960 2683 topology_manager.go:138] "Creating topology manager with none policy" 
May 27 03:10:39.653072 kubelet[2683]: I0527 03:10:39.652970 2683 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:10:39.653072 kubelet[2683]: I0527 03:10:39.653021 2683 state_mem.go:36] "Initialized new in-memory state store" May 27 03:10:39.654117 kubelet[2683]: I0527 03:10:39.653189 2683 kubelet.go:446] "Attempting to sync node with API server" May 27 03:10:39.654117 kubelet[2683]: I0527 03:10:39.653224 2683 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:10:39.654117 kubelet[2683]: I0527 03:10:39.653252 2683 kubelet.go:352] "Adding apiserver pod source" May 27 03:10:39.654117 kubelet[2683]: I0527 03:10:39.653268 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:10:39.656102 kubelet[2683]: I0527 03:10:39.656078 2683 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:10:39.656501 kubelet[2683]: I0527 03:10:39.656487 2683 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:10:39.658739 kubelet[2683]: I0527 03:10:39.658444 2683 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:10:39.658739 kubelet[2683]: I0527 03:10:39.658582 2683 server.go:1287] "Started kubelet" May 27 03:10:39.658876 kubelet[2683]: I0527 03:10:39.658830 2683 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:10:39.660959 kubelet[2683]: I0527 03:10:39.660888 2683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:10:39.661352 kubelet[2683]: I0527 03:10:39.661290 2683 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:10:39.662631 kubelet[2683]: I0527 03:10:39.662603 2683 server.go:479] "Adding debug handlers to kubelet server" May 27 03:10:39.666123 kubelet[2683]: I0527 03:10:39.666077 2683 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:10:39.667527 kubelet[2683]: I0527 03:10:39.666635 2683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:10:39.668137 kubelet[2683]: I0527 03:10:39.668105 2683 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:10:39.668259 kubelet[2683]: I0527 03:10:39.668233 2683 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:10:39.668400 kubelet[2683]: I0527 03:10:39.668376 2683 reconciler.go:26] "Reconciler: start to sync state" May 27 03:10:39.670764 kubelet[2683]: I0527 03:10:39.670300 2683 factory.go:221] Registration of the systemd container factory successfully May 27 03:10:39.673649 kubelet[2683]: I0527 03:10:39.672128 2683 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:10:39.675226 kubelet[2683]: E0527 03:10:39.675205 2683 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:10:39.675798 kubelet[2683]: I0527 03:10:39.675780 2683 factory.go:221] Registration of the containerd container factory successfully May 27 03:10:39.690354 kubelet[2683]: I0527 03:10:39.690269 2683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:10:39.692338 kubelet[2683]: I0527 03:10:39.692284 2683 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 03:10:39.692386 kubelet[2683]: I0527 03:10:39.692354 2683 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:10:39.692428 kubelet[2683]: I0527 03:10:39.692387 2683 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:10:39.692428 kubelet[2683]: I0527 03:10:39.692404 2683 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:10:39.692510 kubelet[2683]: E0527 03:10:39.692474 2683 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:10:39.736040 kubelet[2683]: I0527 03:10:39.735995 2683 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:10:39.736040 kubelet[2683]: I0527 03:10:39.736020 2683 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:10:39.736040 kubelet[2683]: I0527 03:10:39.736042 2683 state_mem.go:36] "Initialized new in-memory state store" May 27 03:10:39.736269 kubelet[2683]: I0527 03:10:39.736255 2683 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:10:39.736298 kubelet[2683]: I0527 03:10:39.736267 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:10:39.736298 kubelet[2683]: I0527 03:10:39.736285 2683 policy_none.go:49] "None policy: Start" May 27 03:10:39.736298 kubelet[2683]: I0527 03:10:39.736294 2683 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:10:39.736376 kubelet[2683]: I0527 03:10:39.736304 2683 state_mem.go:35] "Initializing new in-memory state store" May 27 03:10:39.736448 kubelet[2683]: I0527 03:10:39.736422 2683 state_mem.go:75] "Updated machine memory state" May 27 03:10:39.741984 kubelet[2683]: I0527 03:10:39.741937 2683 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:10:39.742320 kubelet[2683]: I0527 
03:10:39.742286 2683 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:10:39.742427 kubelet[2683]: I0527 03:10:39.742375 2683 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:10:39.742832 kubelet[2683]: I0527 03:10:39.742791 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:10:39.744705 kubelet[2683]: E0527 03:10:39.744671 2683 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:10:39.793791 kubelet[2683]: I0527 03:10:39.793699 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:10:39.793990 kubelet[2683]: I0527 03:10:39.793872 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:10:39.794026 kubelet[2683]: I0527 03:10:39.793872 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:10:39.800809 kubelet[2683]: E0527 03:10:39.800747 2683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:10:39.853656 kubelet[2683]: I0527 03:10:39.851359 2683 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:10:39.869892 kubelet[2683]: I0527 03:10:39.869849 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4730dcceb68122c9dfde74d46ef0fbab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4730dcceb68122c9dfde74d46ef0fbab\") " pod="kube-system/kube-apiserver-localhost" May 27 03:10:39.869892 kubelet[2683]: I0527 03:10:39.869887 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4730dcceb68122c9dfde74d46ef0fbab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4730dcceb68122c9dfde74d46ef0fbab\") " pod="kube-system/kube-apiserver-localhost" May 27 03:10:39.870030 kubelet[2683]: I0527 03:10:39.869926 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4730dcceb68122c9dfde74d46ef0fbab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4730dcceb68122c9dfde74d46ef0fbab\") " pod="kube-system/kube-apiserver-localhost" May 27 03:10:39.870030 kubelet[2683]: I0527 03:10:39.869951 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:39.870030 kubelet[2683]: I0527 03:10:39.869973 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:10:39.870030 kubelet[2683]: I0527 03:10:39.869994 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:39.870030 kubelet[2683]: I0527 03:10:39.870013 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:39.870154 kubelet[2683]: I0527 03:10:39.870056 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:39.870154 kubelet[2683]: I0527 03:10:39.870098 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:10:39.982674 kubelet[2683]: I0527 03:10:39.982630 2683 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 03:10:39.982898 kubelet[2683]: I0527 03:10:39.982784 2683 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:10:40.076830 sudo[2719]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 03:10:40.077184 sudo[2719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 03:10:40.100624 kubelet[2683]: E0527 03:10:40.100511 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:40.100966 kubelet[2683]: E0527 03:10:40.100878 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:40.101282 kubelet[2683]: E0527 03:10:40.101230 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:40.631418 sudo[2719]: pam_unix(sudo:session): session closed for user root May 27 03:10:40.654612 kubelet[2683]: I0527 03:10:40.654543 2683 apiserver.go:52] "Watching apiserver" May 27 03:10:40.668880 kubelet[2683]: I0527 03:10:40.668805 2683 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:10:40.710437 kubelet[2683]: I0527 03:10:40.710358 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:10:40.711048 kubelet[2683]: I0527 03:10:40.710981 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:10:40.711484 kubelet[2683]: E0527 03:10:40.711443 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:40.716629 kubelet[2683]: E0527 03:10:40.715862 2683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:10:40.716629 kubelet[2683]: E0527 03:10:40.716063 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:40.718931 kubelet[2683]: E0527 03:10:40.718896 2683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 27 03:10:40.719268 kubelet[2683]: E0527 03:10:40.719229 2683 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:40.741619 kubelet[2683]: I0527 03:10:40.741547 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7415253179999999 podStartE2EDuration="1.741525318s" podCreationTimestamp="2025-05-27 03:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:10:40.734123022 +0000 UTC m=+1.160431038" watchObservedRunningTime="2025-05-27 03:10:40.741525318 +0000 UTC m=+1.167833334" May 27 03:10:40.749882 kubelet[2683]: I0527 03:10:40.749804 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.749779806 podStartE2EDuration="1.749779806s" podCreationTimestamp="2025-05-27 03:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:10:40.749314158 +0000 UTC m=+1.175622174" watchObservedRunningTime="2025-05-27 03:10:40.749779806 +0000 UTC m=+1.176087852" May 27 03:10:40.750025 kubelet[2683]: I0527 03:10:40.749908 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.749901775 podStartE2EDuration="3.749901775s" podCreationTimestamp="2025-05-27 03:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:10:40.742046075 +0000 UTC m=+1.168354091" watchObservedRunningTime="2025-05-27 03:10:40.749901775 +0000 UTC m=+1.176209791" May 27 03:10:41.712623 kubelet[2683]: E0527 03:10:41.712281 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:41.713127 kubelet[2683]: E0527 03:10:41.712666 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:41.713127 kubelet[2683]: E0527 03:10:41.712910 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:41.906975 sudo[1766]: pam_unix(sudo:session): session closed for user root May 27 03:10:41.908573 sshd[1765]: Connection closed by 10.0.0.1 port 35790 May 27 03:10:41.909300 sshd-session[1763]: pam_unix(sshd:session): session closed for user core May 27 03:10:41.915195 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:35790.service: Deactivated successfully. May 27 03:10:41.917644 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:10:41.917970 systemd[1]: session-7.scope: Consumed 4.345s CPU time, 263.4M memory peak. May 27 03:10:41.919279 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit. May 27 03:10:41.920635 systemd-logind[1537]: Removed session 7. 
May 27 03:10:42.713553 kubelet[2683]: E0527 03:10:42.713500 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:42.714033 kubelet[2683]: E0527 03:10:42.713624 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:42.714033 kubelet[2683]: E0527 03:10:42.713663 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:44.315571 kubelet[2683]: I0527 03:10:44.315513 2683 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:10:44.316051 containerd[1550]: time="2025-05-27T03:10:44.315812904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:10:44.316285 kubelet[2683]: I0527 03:10:44.316128 2683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:10:45.423242 systemd[1]: Created slice kubepods-besteffort-pod9c6268cc_24a7_4623_85dc_24888fbea2f2.slice - libcontainer container kubepods-besteffort-pod9c6268cc_24a7_4623_85dc_24888fbea2f2.slice. May 27 03:10:45.455646 systemd[1]: Created slice kubepods-burstable-podb0e3475b_616a_479f_82ad_9ec90d101cdd.slice - libcontainer container kubepods-burstable-podb0e3475b_616a_479f_82ad_9ec90d101cdd.slice. May 27 03:10:45.465759 systemd[1]: Created slice kubepods-besteffort-pod647caad1_1951_4802_a9e8_e135e8876471.slice - libcontainer container kubepods-besteffort-pod647caad1_1951_4802_a9e8_e135e8876471.slice. 
May 27 03:10:45.503618 kubelet[2683]: I0527 03:10:45.503524 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c6268cc-24a7-4623-85dc-24888fbea2f2-xtables-lock\") pod \"kube-proxy-z8wkw\" (UID: \"9c6268cc-24a7-4623-85dc-24888fbea2f2\") " pod="kube-system/kube-proxy-z8wkw" May 27 03:10:45.503618 kubelet[2683]: I0527 03:10:45.503590 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c6268cc-24a7-4623-85dc-24888fbea2f2-kube-proxy\") pod \"kube-proxy-z8wkw\" (UID: \"9c6268cc-24a7-4623-85dc-24888fbea2f2\") " pod="kube-system/kube-proxy-z8wkw" May 27 03:10:45.503618 kubelet[2683]: I0527 03:10:45.503625 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-lib-modules\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504185 kubelet[2683]: I0527 03:10:45.503647 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-config-path\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504185 kubelet[2683]: I0527 03:10:45.503673 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-net\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504185 kubelet[2683]: I0527 03:10:45.503693 2683 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-hubble-tls\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504185 kubelet[2683]: I0527 03:10:45.503773 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xpf\" (UniqueName: \"kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-kube-api-access-62xpf\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504185 kubelet[2683]: I0527 03:10:45.503798 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vsb\" (UniqueName: \"kubernetes.io/projected/9c6268cc-24a7-4623-85dc-24888fbea2f2-kube-api-access-r9vsb\") pod \"kube-proxy-z8wkw\" (UID: \"9c6268cc-24a7-4623-85dc-24888fbea2f2\") " pod="kube-system/kube-proxy-z8wkw" May 27 03:10:45.504306 kubelet[2683]: I0527 03:10:45.503820 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-bpf-maps\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504306 kubelet[2683]: I0527 03:10:45.503842 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-run\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504306 kubelet[2683]: I0527 03:10:45.503865 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9c6268cc-24a7-4623-85dc-24888fbea2f2-lib-modules\") pod \"kube-proxy-z8wkw\" (UID: \"9c6268cc-24a7-4623-85dc-24888fbea2f2\") " pod="kube-system/kube-proxy-z8wkw" May 27 03:10:45.504306 kubelet[2683]: I0527 03:10:45.503890 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-hostproc\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504306 kubelet[2683]: I0527 03:10:45.503910 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-etc-cni-netd\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504306 kubelet[2683]: I0527 03:10:45.503939 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-kernel\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504463 kubelet[2683]: I0527 03:10:45.503963 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/647caad1-1951-4802-a9e8-e135e8876471-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-66b79\" (UID: \"647caad1-1951-4802-a9e8-e135e8876471\") " pod="kube-system/cilium-operator-6c4d7847fc-66b79" May 27 03:10:45.504463 kubelet[2683]: I0527 03:10:45.503989 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cni-path\") pod 
\"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504463 kubelet[2683]: I0527 03:10:45.504084 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0e3475b-616a-479f-82ad-9ec90d101cdd-clustermesh-secrets\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504463 kubelet[2683]: I0527 03:10:45.504149 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ghr\" (UniqueName: \"kubernetes.io/projected/647caad1-1951-4802-a9e8-e135e8876471-kube-api-access-d9ghr\") pod \"cilium-operator-6c4d7847fc-66b79\" (UID: \"647caad1-1951-4802-a9e8-e135e8876471\") " pod="kube-system/cilium-operator-6c4d7847fc-66b79" May 27 03:10:45.504463 kubelet[2683]: I0527 03:10:45.504183 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-cgroup\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.504577 kubelet[2683]: I0527 03:10:45.504232 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-xtables-lock\") pod \"cilium-8xbrq\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " pod="kube-system/cilium-8xbrq" May 27 03:10:45.750161 kubelet[2683]: E0527 03:10:45.749976 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:45.751096 containerd[1550]: time="2025-05-27T03:10:45.751058204Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-z8wkw,Uid:9c6268cc-24a7-4623-85dc-24888fbea2f2,Namespace:kube-system,Attempt:0,}" May 27 03:10:45.762362 kubelet[2683]: E0527 03:10:45.762294 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:45.762951 containerd[1550]: time="2025-05-27T03:10:45.762896904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xbrq,Uid:b0e3475b-616a-479f-82ad-9ec90d101cdd,Namespace:kube-system,Attempt:0,}" May 27 03:10:45.768684 kubelet[2683]: E0527 03:10:45.768639 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:45.769287 containerd[1550]: time="2025-05-27T03:10:45.769234501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-66b79,Uid:647caad1-1951-4802-a9e8-e135e8876471,Namespace:kube-system,Attempt:0,}" May 27 03:10:45.779125 containerd[1550]: time="2025-05-27T03:10:45.779080151Z" level=info msg="connecting to shim d8cdc874e33a66cd23a76ae750e4429d2399e619785bd60abb35a7ec13c79d77" address="unix:///run/containerd/s/5aab083acee608c9dace14c6d6bf3c55b1958849fde056cb8b41ffc90c413e55" namespace=k8s.io protocol=ttrpc version=3 May 27 03:10:45.810016 systemd[1]: Started cri-containerd-d8cdc874e33a66cd23a76ae750e4429d2399e619785bd60abb35a7ec13c79d77.scope - libcontainer container d8cdc874e33a66cd23a76ae750e4429d2399e619785bd60abb35a7ec13c79d77. 
May 27 03:10:45.817852 containerd[1550]: time="2025-05-27T03:10:45.817746694Z" level=info msg="connecting to shim 56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a" address="unix:///run/containerd/s/9989cd07b0b01ebc61f136c4e0b542591b405e2b783c6f812c4a8715d250d624" namespace=k8s.io protocol=ttrpc version=3 May 27 03:10:45.819930 containerd[1550]: time="2025-05-27T03:10:45.819888634Z" level=info msg="connecting to shim 46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6" address="unix:///run/containerd/s/0f234fd1dc170145eb09e600db4c755da1d5acab931a85c192e989971ff32cdd" namespace=k8s.io protocol=ttrpc version=3 May 27 03:10:45.847128 systemd[1]: Started cri-containerd-46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6.scope - libcontainer container 46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6. May 27 03:10:45.851257 systemd[1]: Started cri-containerd-56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a.scope - libcontainer container 56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a. 
May 27 03:10:45.852457 containerd[1550]: time="2025-05-27T03:10:45.851913786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z8wkw,Uid:9c6268cc-24a7-4623-85dc-24888fbea2f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8cdc874e33a66cd23a76ae750e4429d2399e619785bd60abb35a7ec13c79d77\"" May 27 03:10:45.854122 kubelet[2683]: E0527 03:10:45.854079 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:45.862307 containerd[1550]: time="2025-05-27T03:10:45.862223668Z" level=info msg="CreateContainer within sandbox \"d8cdc874e33a66cd23a76ae750e4429d2399e619785bd60abb35a7ec13c79d77\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:10:45.891173 containerd[1550]: time="2025-05-27T03:10:45.891076296Z" level=info msg="Container 29e4fba318a6d0e327da2115ad68975d898055854f7595e26cf0837147ee2cbb: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:45.892470 containerd[1550]: time="2025-05-27T03:10:45.892414815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xbrq,Uid:b0e3475b-616a-479f-82ad-9ec90d101cdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\"" May 27 03:10:45.893589 kubelet[2683]: E0527 03:10:45.893543 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:45.897060 containerd[1550]: time="2025-05-27T03:10:45.896848255Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 03:10:45.901786 containerd[1550]: time="2025-05-27T03:10:45.901737357Z" level=info msg="CreateContainer within sandbox \"d8cdc874e33a66cd23a76ae750e4429d2399e619785bd60abb35a7ec13c79d77\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29e4fba318a6d0e327da2115ad68975d898055854f7595e26cf0837147ee2cbb\"" May 27 03:10:45.902433 containerd[1550]: time="2025-05-27T03:10:45.902406617Z" level=info msg="StartContainer for \"29e4fba318a6d0e327da2115ad68975d898055854f7595e26cf0837147ee2cbb\"" May 27 03:10:45.903755 containerd[1550]: time="2025-05-27T03:10:45.903705352Z" level=info msg="connecting to shim 29e4fba318a6d0e327da2115ad68975d898055854f7595e26cf0837147ee2cbb" address="unix:///run/containerd/s/5aab083acee608c9dace14c6d6bf3c55b1958849fde056cb8b41ffc90c413e55" protocol=ttrpc version=3 May 27 03:10:45.910672 containerd[1550]: time="2025-05-27T03:10:45.910626946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-66b79,Uid:647caad1-1951-4802-a9e8-e135e8876471,Namespace:kube-system,Attempt:0,} returns sandbox id \"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\"" May 27 03:10:45.911401 kubelet[2683]: E0527 03:10:45.911379 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:45.933890 systemd[1]: Started cri-containerd-29e4fba318a6d0e327da2115ad68975d898055854f7595e26cf0837147ee2cbb.scope - libcontainer container 29e4fba318a6d0e327da2115ad68975d898055854f7595e26cf0837147ee2cbb. 
May 27 03:10:45.988217 containerd[1550]: time="2025-05-27T03:10:45.988158504Z" level=info msg="StartContainer for \"29e4fba318a6d0e327da2115ad68975d898055854f7595e26cf0837147ee2cbb\" returns successfully" May 27 03:10:46.723945 kubelet[2683]: E0527 03:10:46.723899 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:46.733149 kubelet[2683]: I0527 03:10:46.733083 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z8wkw" podStartSLOduration=1.733068642 podStartE2EDuration="1.733068642s" podCreationTimestamp="2025-05-27 03:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:10:46.732876377 +0000 UTC m=+7.159184443" watchObservedRunningTime="2025-05-27 03:10:46.733068642 +0000 UTC m=+7.159376658" May 27 03:10:49.283663 kubelet[2683]: E0527 03:10:49.283501 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:49.728931 kubelet[2683]: E0527 03:10:49.728759 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:50.730508 kubelet[2683]: E0527 03:10:50.730472 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:51.202819 kubelet[2683]: E0527 03:10:51.202427 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:51.731778 kubelet[2683]: E0527 
03:10:51.731699 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:51.736768 kubelet[2683]: E0527 03:10:51.736738 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:52.571527 update_engine[1540]: I20250527 03:10:52.570775 1540 update_attempter.cc:509] Updating boot flags... May 27 03:10:53.575829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280990958.mount: Deactivated successfully. May 27 03:10:55.503749 containerd[1550]: time="2025-05-27T03:10:55.503663426Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:10:55.504884 containerd[1550]: time="2025-05-27T03:10:55.504838465Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 03:10:55.506106 containerd[1550]: time="2025-05-27T03:10:55.506022647Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:10:55.507429 containerd[1550]: time="2025-05-27T03:10:55.507398201Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.610515839s" May 27 03:10:55.507429 containerd[1550]: 
time="2025-05-27T03:10:55.507428611Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 03:10:55.508461 containerd[1550]: time="2025-05-27T03:10:55.508399278Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 03:10:55.509481 containerd[1550]: time="2025-05-27T03:10:55.509422636Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 03:10:55.521362 containerd[1550]: time="2025-05-27T03:10:55.521288423Z" level=info msg="Container 9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:55.526296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037381099.mount: Deactivated successfully. 
May 27 03:10:55.536641 containerd[1550]: time="2025-05-27T03:10:55.536561396Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\"" May 27 03:10:55.537381 containerd[1550]: time="2025-05-27T03:10:55.537199974Z" level=info msg="StartContainer for \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\"" May 27 03:10:55.538314 containerd[1550]: time="2025-05-27T03:10:55.538282429Z" level=info msg="connecting to shim 9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1" address="unix:///run/containerd/s/0f234fd1dc170145eb09e600db4c755da1d5acab931a85c192e989971ff32cdd" protocol=ttrpc version=3 May 27 03:10:55.580989 systemd[1]: Started cri-containerd-9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1.scope - libcontainer container 9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1. May 27 03:10:55.613003 containerd[1550]: time="2025-05-27T03:10:55.612963446Z" level=info msg="StartContainer for \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" returns successfully" May 27 03:10:55.624166 systemd[1]: cri-containerd-9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1.scope: Deactivated successfully. 
May 27 03:10:55.625354 containerd[1550]: time="2025-05-27T03:10:55.625317141Z" level=info msg="received exit event container_id:\"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" id:\"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" pid:3120 exited_at:{seconds:1748315455 nanos:624889512}" May 27 03:10:55.625431 containerd[1550]: time="2025-05-27T03:10:55.625380167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" id:\"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" pid:3120 exited_at:{seconds:1748315455 nanos:624889512}" May 27 03:10:55.649091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1-rootfs.mount: Deactivated successfully. May 27 03:10:55.750893 kubelet[2683]: E0527 03:10:55.750844 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:56.755092 kubelet[2683]: E0527 03:10:56.754515 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:56.757811 containerd[1550]: time="2025-05-27T03:10:56.757668851Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 03:10:56.776334 containerd[1550]: time="2025-05-27T03:10:56.776278062Z" level=info msg="Container 1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:56.780773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628588667.mount: Deactivated successfully. 
May 27 03:10:56.785178 containerd[1550]: time="2025-05-27T03:10:56.785105423Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\"" May 27 03:10:56.785843 containerd[1550]: time="2025-05-27T03:10:56.785795847Z" level=info msg="StartContainer for \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\"" May 27 03:10:56.786926 containerd[1550]: time="2025-05-27T03:10:56.786891420Z" level=info msg="connecting to shim 1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455" address="unix:///run/containerd/s/0f234fd1dc170145eb09e600db4c755da1d5acab931a85c192e989971ff32cdd" protocol=ttrpc version=3 May 27 03:10:56.812003 systemd[1]: Started cri-containerd-1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455.scope - libcontainer container 1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455. May 27 03:10:56.853267 containerd[1550]: time="2025-05-27T03:10:56.853209950Z" level=info msg="StartContainer for \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" returns successfully" May 27 03:10:56.871767 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:10:56.872096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:10:56.873374 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 03:10:56.875555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:10:56.878361 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:10:56.879145 systemd[1]: cri-containerd-1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455.scope: Deactivated successfully. 
May 27 03:10:56.879436 containerd[1550]: time="2025-05-27T03:10:56.879396755Z" level=info msg="received exit event container_id:\"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" id:\"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" pid:3163 exited_at:{seconds:1748315456 nanos:878968672}" May 27 03:10:56.880075 containerd[1550]: time="2025-05-27T03:10:56.880018742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" id:\"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" pid:3163 exited_at:{seconds:1748315456 nanos:878968672}" May 27 03:10:56.907988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:10:57.508623 containerd[1550]: time="2025-05-27T03:10:57.508545367Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:10:57.509438 containerd[1550]: time="2025-05-27T03:10:57.509398298Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 27 03:10:57.510710 containerd[1550]: time="2025-05-27T03:10:57.510665280Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:10:57.512048 containerd[1550]: time="2025-05-27T03:10:57.511992660Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.003558872s" May 27 03:10:57.512048 containerd[1550]: time="2025-05-27T03:10:57.512038724Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 27 03:10:57.515124 containerd[1550]: time="2025-05-27T03:10:57.515089775Z" level=info msg="CreateContainer within sandbox \"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 03:10:57.526829 containerd[1550]: time="2025-05-27T03:10:57.526742140Z" level=info msg="Container 2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:57.534210 containerd[1550]: time="2025-05-27T03:10:57.534157117Z" level=info msg="CreateContainer within sandbox \"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\"" May 27 03:10:57.535109 containerd[1550]: time="2025-05-27T03:10:57.535069242Z" level=info msg="StartContainer for \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\"" May 27 03:10:57.536159 containerd[1550]: time="2025-05-27T03:10:57.536136159Z" level=info msg="connecting to shim 2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63" address="unix:///run/containerd/s/9989cd07b0b01ebc61f136c4e0b542591b405e2b783c6f812c4a8715d250d624" protocol=ttrpc version=3 May 27 03:10:57.567002 systemd[1]: Started cri-containerd-2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63.scope - libcontainer container 2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63. 
May 27 03:10:57.601224 containerd[1550]: time="2025-05-27T03:10:57.601175796Z" level=info msg="StartContainer for \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" returns successfully" May 27 03:10:57.765023 kubelet[2683]: E0527 03:10:57.762699 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:57.769464 kubelet[2683]: E0527 03:10:57.769414 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:57.775787 containerd[1550]: time="2025-05-27T03:10:57.775670090Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 03:10:57.780570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455-rootfs.mount: Deactivated successfully. 
May 27 03:10:57.864364 kubelet[2683]: I0527 03:10:57.864233 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-66b79" podStartSLOduration=1.242479388 podStartE2EDuration="12.843281116s" podCreationTimestamp="2025-05-27 03:10:45 +0000 UTC" firstStartedPulling="2025-05-27 03:10:45.912104499 +0000 UTC m=+6.338412515" lastFinishedPulling="2025-05-27 03:10:57.512906227 +0000 UTC m=+17.939214243" observedRunningTime="2025-05-27 03:10:57.842680811 +0000 UTC m=+18.268988847" watchObservedRunningTime="2025-05-27 03:10:57.843281116 +0000 UTC m=+18.269589132" May 27 03:10:57.885082 containerd[1550]: time="2025-05-27T03:10:57.885011962Z" level=info msg="Container 03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:57.902942 containerd[1550]: time="2025-05-27T03:10:57.902873852Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\"" May 27 03:10:57.903616 containerd[1550]: time="2025-05-27T03:10:57.903521937Z" level=info msg="StartContainer for \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\"" May 27 03:10:57.905486 containerd[1550]: time="2025-05-27T03:10:57.905431020Z" level=info msg="connecting to shim 03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945" address="unix:///run/containerd/s/0f234fd1dc170145eb09e600db4c755da1d5acab931a85c192e989971ff32cdd" protocol=ttrpc version=3 May 27 03:10:57.939969 systemd[1]: Started cri-containerd-03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945.scope - libcontainer container 03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945. 
May 27 03:10:58.017471 containerd[1550]: time="2025-05-27T03:10:58.017333813Z" level=info msg="StartContainer for \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" returns successfully" May 27 03:10:58.021767 systemd[1]: cri-containerd-03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945.scope: Deactivated successfully. May 27 03:10:58.024669 containerd[1550]: time="2025-05-27T03:10:58.024631110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" id:\"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" pid:3262 exited_at:{seconds:1748315458 nanos:24351809}" May 27 03:10:58.025018 containerd[1550]: time="2025-05-27T03:10:58.024942392Z" level=info msg="received exit event container_id:\"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" id:\"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" pid:3262 exited_at:{seconds:1748315458 nanos:24351809}" May 27 03:10:58.055549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945-rootfs.mount: Deactivated successfully. 
May 27 03:10:58.774111 kubelet[2683]: E0527 03:10:58.774045 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:58.774860 kubelet[2683]: E0527 03:10:58.774303 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:58.776932 containerd[1550]: time="2025-05-27T03:10:58.776886571Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 03:10:58.791994 containerd[1550]: time="2025-05-27T03:10:58.791939990Z" level=info msg="Container a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:58.800814 containerd[1550]: time="2025-05-27T03:10:58.800769815Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\"" May 27 03:10:58.801338 containerd[1550]: time="2025-05-27T03:10:58.801289207Z" level=info msg="StartContainer for \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\"" May 27 03:10:58.802838 containerd[1550]: time="2025-05-27T03:10:58.802809227Z" level=info msg="connecting to shim a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd" address="unix:///run/containerd/s/0f234fd1dc170145eb09e600db4c755da1d5acab931a85c192e989971ff32cdd" protocol=ttrpc version=3 May 27 03:10:58.822912 systemd[1]: Started cri-containerd-a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd.scope - libcontainer container 
a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd. May 27 03:10:58.854908 systemd[1]: cri-containerd-a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd.scope: Deactivated successfully. May 27 03:10:58.857705 containerd[1550]: time="2025-05-27T03:10:58.857663630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" id:\"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" pid:3302 exited_at:{seconds:1748315458 nanos:856440770}" May 27 03:10:58.859492 containerd[1550]: time="2025-05-27T03:10:58.856682305Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0e3475b_616a_479f_82ad_9ec90d101cdd.slice/cri-containerd-a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd.scope/memory.events\": no such file or directory" May 27 03:10:58.861216 containerd[1550]: time="2025-05-27T03:10:58.861147000Z" level=info msg="received exit event container_id:\"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" id:\"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" pid:3302 exited_at:{seconds:1748315458 nanos:856440770}" May 27 03:10:58.863823 containerd[1550]: time="2025-05-27T03:10:58.863758061Z" level=info msg="StartContainer for \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" returns successfully" May 27 03:10:58.890983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd-rootfs.mount: Deactivated successfully. 
May 27 03:10:59.779051 kubelet[2683]: E0527 03:10:59.779004 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:10:59.783062 containerd[1550]: time="2025-05-27T03:10:59.782999733Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 03:10:59.837916 containerd[1550]: time="2025-05-27T03:10:59.833982231Z" level=info msg="Container f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e: CDI devices from CRI Config.CDIDevices: []" May 27 03:10:59.886117 containerd[1550]: time="2025-05-27T03:10:59.885957491Z" level=info msg="CreateContainer within sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\"" May 27 03:10:59.887313 containerd[1550]: time="2025-05-27T03:10:59.887280444Z" level=info msg="StartContainer for \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\"" May 27 03:10:59.888971 containerd[1550]: time="2025-05-27T03:10:59.888923674Z" level=info msg="connecting to shim f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e" address="unix:///run/containerd/s/0f234fd1dc170145eb09e600db4c755da1d5acab931a85c192e989971ff32cdd" protocol=ttrpc version=3 May 27 03:10:59.918073 systemd[1]: Started cri-containerd-f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e.scope - libcontainer container f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e. 
May 27 03:10:59.965928 containerd[1550]: time="2025-05-27T03:10:59.965878468Z" level=info msg="StartContainer for \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" returns successfully" May 27 03:11:00.074119 containerd[1550]: time="2025-05-27T03:11:00.073991091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" id:\"cf3859321c8c5c220ef1d7a2f66fb0cf417c2c37087cd07e2c9a6926ae222656\" pid:3374 exited_at:{seconds:1748315460 nanos:73303693}" May 27 03:11:00.166879 kubelet[2683]: I0527 03:11:00.166829 2683 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:11:00.206057 systemd[1]: Created slice kubepods-burstable-pod0882235d_31dd_4635_a222_a0ff8b59624a.slice - libcontainer container kubepods-burstable-pod0882235d_31dd_4635_a222_a0ff8b59624a.slice. May 27 03:11:00.215805 systemd[1]: Created slice kubepods-burstable-podf151dbdd_88d0_4bd2_af84_8e423b06c5b7.slice - libcontainer container kubepods-burstable-podf151dbdd_88d0_4bd2_af84_8e423b06c5b7.slice. 
May 27 03:11:00.308736 kubelet[2683]: I0527 03:11:00.308665 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdbnn\" (UniqueName: \"kubernetes.io/projected/0882235d-31dd-4635-a222-a0ff8b59624a-kube-api-access-vdbnn\") pod \"coredns-668d6bf9bc-855nv\" (UID: \"0882235d-31dd-4635-a222-a0ff8b59624a\") " pod="kube-system/coredns-668d6bf9bc-855nv" May 27 03:11:00.308905 kubelet[2683]: I0527 03:11:00.308747 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f151dbdd-88d0-4bd2-af84-8e423b06c5b7-config-volume\") pod \"coredns-668d6bf9bc-k29k2\" (UID: \"f151dbdd-88d0-4bd2-af84-8e423b06c5b7\") " pod="kube-system/coredns-668d6bf9bc-k29k2" May 27 03:11:00.308905 kubelet[2683]: I0527 03:11:00.308781 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmbqv\" (UniqueName: \"kubernetes.io/projected/f151dbdd-88d0-4bd2-af84-8e423b06c5b7-kube-api-access-pmbqv\") pod \"coredns-668d6bf9bc-k29k2\" (UID: \"f151dbdd-88d0-4bd2-af84-8e423b06c5b7\") " pod="kube-system/coredns-668d6bf9bc-k29k2" May 27 03:11:00.308905 kubelet[2683]: I0527 03:11:00.308800 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0882235d-31dd-4635-a222-a0ff8b59624a-config-volume\") pod \"coredns-668d6bf9bc-855nv\" (UID: \"0882235d-31dd-4635-a222-a0ff8b59624a\") " pod="kube-system/coredns-668d6bf9bc-855nv" May 27 03:11:00.513059 kubelet[2683]: E0527 03:11:00.513014 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:00.519527 kubelet[2683]: E0527 03:11:00.519483 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:00.521192 containerd[1550]: time="2025-05-27T03:11:00.520220211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29k2,Uid:f151dbdd-88d0-4bd2-af84-8e423b06c5b7,Namespace:kube-system,Attempt:0,}" May 27 03:11:00.523188 containerd[1550]: time="2025-05-27T03:11:00.523111580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-855nv,Uid:0882235d-31dd-4635-a222-a0ff8b59624a,Namespace:kube-system,Attempt:0,}" May 27 03:11:00.785306 kubelet[2683]: E0527 03:11:00.785141 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:00.800578 kubelet[2683]: I0527 03:11:00.800475 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8xbrq" podStartSLOduration=6.187844086 podStartE2EDuration="15.800455469s" podCreationTimestamp="2025-05-27 03:10:45 +0000 UTC" firstStartedPulling="2025-05-27 03:10:45.895657773 +0000 UTC m=+6.321965789" lastFinishedPulling="2025-05-27 03:10:55.508269146 +0000 UTC m=+15.934577172" observedRunningTime="2025-05-27 03:11:00.799840752 +0000 UTC m=+21.226148788" watchObservedRunningTime="2025-05-27 03:11:00.800455469 +0000 UTC m=+21.226763485" May 27 03:11:01.787212 kubelet[2683]: E0527 03:11:01.787176 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:02.187934 systemd-networkd[1488]: cilium_host: Link UP May 27 03:11:02.188127 systemd-networkd[1488]: cilium_net: Link UP May 27 03:11:02.188361 systemd-networkd[1488]: cilium_net: Gained carrier May 27 03:11:02.188583 systemd-networkd[1488]: cilium_host: Gained carrier May 27 03:11:02.297320 systemd-networkd[1488]: 
cilium_vxlan: Link UP May 27 03:11:02.297340 systemd-networkd[1488]: cilium_vxlan: Gained carrier May 27 03:11:02.524761 kernel: NET: Registered PF_ALG protocol family May 27 03:11:02.789056 kubelet[2683]: E0527 03:11:02.788915 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:03.068932 systemd-networkd[1488]: cilium_net: Gained IPv6LL May 27 03:11:03.133979 systemd-networkd[1488]: cilium_host: Gained IPv6LL May 27 03:11:03.231173 systemd-networkd[1488]: lxc_health: Link UP May 27 03:11:03.231477 systemd-networkd[1488]: lxc_health: Gained carrier May 27 03:11:03.576754 kernel: eth0: renamed from tmp3c735 May 27 03:11:03.585137 systemd-networkd[1488]: lxc9e083b77dadd: Link UP May 27 03:11:03.585512 systemd-networkd[1488]: lxc61545cb8c54e: Link UP May 27 03:11:03.588660 systemd-networkd[1488]: lxc9e083b77dadd: Gained carrier May 27 03:11:03.588801 kernel: eth0: renamed from tmp4ab14 May 27 03:11:03.591470 systemd-networkd[1488]: lxc61545cb8c54e: Gained carrier May 27 03:11:03.790690 kubelet[2683]: E0527 03:11:03.790633 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:03.838833 systemd-networkd[1488]: cilium_vxlan: Gained IPv6LL May 27 03:11:04.733033 systemd-networkd[1488]: lxc9e083b77dadd: Gained IPv6LL May 27 03:11:04.792489 kubelet[2683]: E0527 03:11:04.792433 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:04.860916 systemd-networkd[1488]: lxc_health: Gained IPv6LL May 27 03:11:05.565042 systemd-networkd[1488]: lxc61545cb8c54e: Gained IPv6LL May 27 03:11:05.794345 kubelet[2683]: E0527 03:11:05.794297 2683 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:06.867229 systemd[1]: Started sshd@7-10.0.0.11:22-10.0.0.1:59934.service - OpenSSH per-connection server daemon (10.0.0.1:59934). May 27 03:11:06.930794 sshd[3844]: Accepted publickey for core from 10.0.0.1 port 59934 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:06.932328 sshd-session[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:06.938268 systemd-logind[1537]: New session 8 of user core. May 27 03:11:06.951017 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:11:07.179875 sshd[3846]: Connection closed by 10.0.0.1 port 59934 May 27 03:11:07.182912 sshd-session[3844]: pam_unix(sshd:session): session closed for user core May 27 03:11:07.187300 systemd-logind[1537]: Session 8 logged out. Waiting for processes to exit. May 27 03:11:07.189852 systemd[1]: sshd@7-10.0.0.11:22-10.0.0.1:59934.service: Deactivated successfully. May 27 03:11:07.193553 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:11:07.197231 systemd-logind[1537]: Removed session 8. 
May 27 03:11:07.352492 containerd[1550]: time="2025-05-27T03:11:07.352351527Z" level=info msg="connecting to shim 4ab14220da68a8e909d659b384b2c6d74d0e4e001912ef041923bf4badcb7fd3" address="unix:///run/containerd/s/36e936864ba3a49c232371f9db91356671b60e6274bb122e64e2f6a5007678ab" namespace=k8s.io protocol=ttrpc version=3 May 27 03:11:07.355168 containerd[1550]: time="2025-05-27T03:11:07.355103878Z" level=info msg="connecting to shim 3c735de19af7d32eb8e0441afe7a916155d019208db377c0b044aabaca5686da" address="unix:///run/containerd/s/f4f59b9989d0a12a27c7302828425d2214a9f731a2767a5be1adb53cd13e6ba7" namespace=k8s.io protocol=ttrpc version=3 May 27 03:11:07.395015 systemd[1]: Started cri-containerd-3c735de19af7d32eb8e0441afe7a916155d019208db377c0b044aabaca5686da.scope - libcontainer container 3c735de19af7d32eb8e0441afe7a916155d019208db377c0b044aabaca5686da. May 27 03:11:07.397229 systemd[1]: Started cri-containerd-4ab14220da68a8e909d659b384b2c6d74d0e4e001912ef041923bf4badcb7fd3.scope - libcontainer container 4ab14220da68a8e909d659b384b2c6d74d0e4e001912ef041923bf4badcb7fd3. 
May 27 03:11:07.414628 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:11:07.417263 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:11:07.451703 containerd[1550]: time="2025-05-27T03:11:07.451578403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29k2,Uid:f151dbdd-88d0-4bd2-af84-8e423b06c5b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab14220da68a8e909d659b384b2c6d74d0e4e001912ef041923bf4badcb7fd3\"" May 27 03:11:07.453607 containerd[1550]: time="2025-05-27T03:11:07.453367245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-855nv,Uid:0882235d-31dd-4635-a222-a0ff8b59624a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c735de19af7d32eb8e0441afe7a916155d019208db377c0b044aabaca5686da\"" May 27 03:11:07.455643 kubelet[2683]: E0527 03:11:07.455613 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:07.456088 kubelet[2683]: E0527 03:11:07.455950 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:07.458778 containerd[1550]: time="2025-05-27T03:11:07.458479859Z" level=info msg="CreateContainer within sandbox \"3c735de19af7d32eb8e0441afe7a916155d019208db377c0b044aabaca5686da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:11:07.459157 containerd[1550]: time="2025-05-27T03:11:07.459124195Z" level=info msg="CreateContainer within sandbox \"4ab14220da68a8e909d659b384b2c6d74d0e4e001912ef041923bf4badcb7fd3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:11:07.479745 containerd[1550]: 
time="2025-05-27T03:11:07.479144801Z" level=info msg="Container ece331255452f8d18c0dfa7698a184af8f701720664b62da3556456bc28907e3: CDI devices from CRI Config.CDIDevices: []" May 27 03:11:07.479745 containerd[1550]: time="2025-05-27T03:11:07.479189456Z" level=info msg="Container 7fbc5bb24c3c92f1f38d5cc25eccc965e1fbb37236dff32d0e4d3e48ca62aa22: CDI devices from CRI Config.CDIDevices: []" May 27 03:11:07.490754 containerd[1550]: time="2025-05-27T03:11:07.490675177Z" level=info msg="CreateContainer within sandbox \"4ab14220da68a8e909d659b384b2c6d74d0e4e001912ef041923bf4badcb7fd3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ece331255452f8d18c0dfa7698a184af8f701720664b62da3556456bc28907e3\"" May 27 03:11:07.491213 containerd[1550]: time="2025-05-27T03:11:07.491178402Z" level=info msg="StartContainer for \"ece331255452f8d18c0dfa7698a184af8f701720664b62da3556456bc28907e3\"" May 27 03:11:07.494017 containerd[1550]: time="2025-05-27T03:11:07.493976340Z" level=info msg="CreateContainer within sandbox \"3c735de19af7d32eb8e0441afe7a916155d019208db377c0b044aabaca5686da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fbc5bb24c3c92f1f38d5cc25eccc965e1fbb37236dff32d0e4d3e48ca62aa22\"" May 27 03:11:07.494518 containerd[1550]: time="2025-05-27T03:11:07.494483322Z" level=info msg="StartContainer for \"7fbc5bb24c3c92f1f38d5cc25eccc965e1fbb37236dff32d0e4d3e48ca62aa22\"" May 27 03:11:07.495634 containerd[1550]: time="2025-05-27T03:11:07.495597243Z" level=info msg="connecting to shim 7fbc5bb24c3c92f1f38d5cc25eccc965e1fbb37236dff32d0e4d3e48ca62aa22" address="unix:///run/containerd/s/f4f59b9989d0a12a27c7302828425d2214a9f731a2767a5be1adb53cd13e6ba7" protocol=ttrpc version=3 May 27 03:11:07.504115 containerd[1550]: time="2025-05-27T03:11:07.504061718Z" level=info msg="connecting to shim ece331255452f8d18c0dfa7698a184af8f701720664b62da3556456bc28907e3" 
address="unix:///run/containerd/s/36e936864ba3a49c232371f9db91356671b60e6274bb122e64e2f6a5007678ab" protocol=ttrpc version=3 May 27 03:11:07.520913 systemd[1]: Started cri-containerd-7fbc5bb24c3c92f1f38d5cc25eccc965e1fbb37236dff32d0e4d3e48ca62aa22.scope - libcontainer container 7fbc5bb24c3c92f1f38d5cc25eccc965e1fbb37236dff32d0e4d3e48ca62aa22. May 27 03:11:07.525787 systemd[1]: Started cri-containerd-ece331255452f8d18c0dfa7698a184af8f701720664b62da3556456bc28907e3.scope - libcontainer container ece331255452f8d18c0dfa7698a184af8f701720664b62da3556456bc28907e3. May 27 03:11:07.575509 containerd[1550]: time="2025-05-27T03:11:07.575421704Z" level=info msg="StartContainer for \"ece331255452f8d18c0dfa7698a184af8f701720664b62da3556456bc28907e3\" returns successfully" May 27 03:11:07.575656 containerd[1550]: time="2025-05-27T03:11:07.575568218Z" level=info msg="StartContainer for \"7fbc5bb24c3c92f1f38d5cc25eccc965e1fbb37236dff32d0e4d3e48ca62aa22\" returns successfully" May 27 03:11:07.816231 kubelet[2683]: E0527 03:11:07.816194 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:07.839514 kubelet[2683]: E0527 03:11:07.839305 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:07.881727 kubelet[2683]: I0527 03:11:07.881638 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k29k2" podStartSLOduration=22.881607454 podStartE2EDuration="22.881607454s" podCreationTimestamp="2025-05-27 03:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:11:07.880697019 +0000 UTC m=+28.307005035" watchObservedRunningTime="2025-05-27 03:11:07.881607454 +0000 UTC 
m=+28.307915470" May 27 03:11:08.318999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240870588.mount: Deactivated successfully. May 27 03:11:08.848727 kubelet[2683]: E0527 03:11:08.848675 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:08.849146 kubelet[2683]: E0527 03:11:08.848874 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:09.851111 kubelet[2683]: E0527 03:11:09.851060 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:09.851111 kubelet[2683]: E0527 03:11:09.851118 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:12.196911 systemd[1]: Started sshd@8-10.0.0.11:22-10.0.0.1:59944.service - OpenSSH per-connection server daemon (10.0.0.1:59944). May 27 03:11:12.241445 sshd[4036]: Accepted publickey for core from 10.0.0.1 port 59944 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:12.243241 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:12.247836 systemd-logind[1537]: New session 9 of user core. May 27 03:11:12.257920 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:11:12.384747 sshd[4038]: Connection closed by 10.0.0.1 port 59944 May 27 03:11:12.385171 sshd-session[4036]: pam_unix(sshd:session): session closed for user core May 27 03:11:12.391006 systemd[1]: sshd@8-10.0.0.11:22-10.0.0.1:59944.service: Deactivated successfully. 
May 27 03:11:12.393358 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:11:12.394335 systemd-logind[1537]: Session 9 logged out. Waiting for processes to exit. May 27 03:11:12.395958 systemd-logind[1537]: Removed session 9. May 27 03:11:17.397951 systemd[1]: Started sshd@9-10.0.0.11:22-10.0.0.1:54258.service - OpenSSH per-connection server daemon (10.0.0.1:54258). May 27 03:11:17.446148 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 54258 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:17.447485 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:17.451771 systemd-logind[1537]: New session 10 of user core. May 27 03:11:17.462879 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:11:17.568949 sshd[4057]: Connection closed by 10.0.0.1 port 54258 May 27 03:11:17.569249 sshd-session[4055]: pam_unix(sshd:session): session closed for user core May 27 03:11:17.573160 systemd[1]: sshd@9-10.0.0.11:22-10.0.0.1:54258.service: Deactivated successfully. May 27 03:11:17.575130 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:11:17.575923 systemd-logind[1537]: Session 10 logged out. Waiting for processes to exit. May 27 03:11:17.577077 systemd-logind[1537]: Removed session 10. May 27 03:11:22.594942 systemd[1]: Started sshd@10-10.0.0.11:22-10.0.0.1:54272.service - OpenSSH per-connection server daemon (10.0.0.1:54272). May 27 03:11:22.647864 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 54272 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:22.649812 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:22.656225 systemd-logind[1537]: New session 11 of user core. May 27 03:11:22.664029 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 27 03:11:22.784431 sshd[4074]: Connection closed by 10.0.0.1 port 54272 May 27 03:11:22.784818 sshd-session[4072]: pam_unix(sshd:session): session closed for user core May 27 03:11:22.789310 systemd[1]: sshd@10-10.0.0.11:22-10.0.0.1:54272.service: Deactivated successfully. May 27 03:11:22.791387 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:11:22.792315 systemd-logind[1537]: Session 11 logged out. Waiting for processes to exit. May 27 03:11:22.793622 systemd-logind[1537]: Removed session 11. May 27 03:11:27.800705 systemd[1]: Started sshd@11-10.0.0.11:22-10.0.0.1:36028.service - OpenSSH per-connection server daemon (10.0.0.1:36028). May 27 03:11:27.857220 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 36028 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:27.859326 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:27.864508 systemd-logind[1537]: New session 12 of user core. May 27 03:11:27.875045 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:11:27.998165 sshd[4091]: Connection closed by 10.0.0.1 port 36028 May 27 03:11:27.998507 sshd-session[4089]: pam_unix(sshd:session): session closed for user core May 27 03:11:28.008699 systemd[1]: sshd@11-10.0.0.11:22-10.0.0.1:36028.service: Deactivated successfully. May 27 03:11:28.011145 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:11:28.012069 systemd-logind[1537]: Session 12 logged out. Waiting for processes to exit. May 27 03:11:28.015541 systemd[1]: Started sshd@12-10.0.0.11:22-10.0.0.1:36038.service - OpenSSH per-connection server daemon (10.0.0.1:36038). May 27 03:11:28.016230 systemd-logind[1537]: Removed session 12. 
May 27 03:11:28.077319 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 36038 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:28.079422 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:28.084809 systemd-logind[1537]: New session 13 of user core. May 27 03:11:28.099936 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:11:28.322412 sshd[4108]: Connection closed by 10.0.0.1 port 36038 May 27 03:11:28.322909 sshd-session[4106]: pam_unix(sshd:session): session closed for user core May 27 03:11:28.335115 systemd[1]: sshd@12-10.0.0.11:22-10.0.0.1:36038.service: Deactivated successfully. May 27 03:11:28.338478 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:11:28.340499 systemd-logind[1537]: Session 13 logged out. Waiting for processes to exit. May 27 03:11:28.345393 systemd[1]: Started sshd@13-10.0.0.11:22-10.0.0.1:36050.service - OpenSSH per-connection server daemon (10.0.0.1:36050). May 27 03:11:28.346365 systemd-logind[1537]: Removed session 13. May 27 03:11:28.399791 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 36050 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:28.401820 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:28.413967 systemd-logind[1537]: New session 14 of user core. May 27 03:11:28.423015 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:11:28.545348 sshd[4122]: Connection closed by 10.0.0.1 port 36050 May 27 03:11:28.545792 sshd-session[4120]: pam_unix(sshd:session): session closed for user core May 27 03:11:28.550451 systemd[1]: sshd@13-10.0.0.11:22-10.0.0.1:36050.service: Deactivated successfully. May 27 03:11:28.552666 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:11:28.554541 systemd-logind[1537]: Session 14 logged out. Waiting for processes to exit. 
May 27 03:11:28.557008 systemd-logind[1537]: Removed session 14. May 27 03:11:33.561666 systemd[1]: Started sshd@14-10.0.0.11:22-10.0.0.1:36644.service - OpenSSH per-connection server daemon (10.0.0.1:36644). May 27 03:11:33.621407 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 36644 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:33.623150 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:33.627551 systemd-logind[1537]: New session 15 of user core. May 27 03:11:33.635897 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:11:33.752038 sshd[4139]: Connection closed by 10.0.0.1 port 36644 May 27 03:11:33.752381 sshd-session[4137]: pam_unix(sshd:session): session closed for user core May 27 03:11:33.757248 systemd[1]: sshd@14-10.0.0.11:22-10.0.0.1:36644.service: Deactivated successfully. May 27 03:11:33.759407 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:11:33.760293 systemd-logind[1537]: Session 15 logged out. Waiting for processes to exit. May 27 03:11:33.761841 systemd-logind[1537]: Removed session 15. May 27 03:11:38.771256 systemd[1]: Started sshd@15-10.0.0.11:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648). May 27 03:11:38.824338 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:38.826245 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:38.831259 systemd-logind[1537]: New session 16 of user core. May 27 03:11:38.840861 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 27 03:11:38.959818 sshd[4155]: Connection closed by 10.0.0.1 port 36648 May 27 03:11:38.960381 sshd-session[4153]: pam_unix(sshd:session): session closed for user core May 27 03:11:38.972069 systemd[1]: sshd@15-10.0.0.11:22-10.0.0.1:36648.service: Deactivated successfully. May 27 03:11:38.974373 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:11:38.975270 systemd-logind[1537]: Session 16 logged out. Waiting for processes to exit. May 27 03:11:38.978840 systemd[1]: Started sshd@16-10.0.0.11:22-10.0.0.1:36662.service - OpenSSH per-connection server daemon (10.0.0.1:36662). May 27 03:11:38.979522 systemd-logind[1537]: Removed session 16. May 27 03:11:39.036205 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 36662 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:39.038080 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:39.043166 systemd-logind[1537]: New session 17 of user core. May 27 03:11:39.050957 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:11:39.316419 sshd[4170]: Connection closed by 10.0.0.1 port 36662 May 27 03:11:39.316758 sshd-session[4168]: pam_unix(sshd:session): session closed for user core May 27 03:11:39.325687 systemd[1]: sshd@16-10.0.0.11:22-10.0.0.1:36662.service: Deactivated successfully. May 27 03:11:39.327770 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:11:39.328584 systemd-logind[1537]: Session 17 logged out. Waiting for processes to exit. May 27 03:11:39.331776 systemd[1]: Started sshd@17-10.0.0.11:22-10.0.0.1:36664.service - OpenSSH per-connection server daemon (10.0.0.1:36664). May 27 03:11:39.332431 systemd-logind[1537]: Removed session 17. 
May 27 03:11:39.395585 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 36664 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:39.397324 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:39.402743 systemd-logind[1537]: New session 18 of user core. May 27 03:11:39.414987 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:11:40.377106 sshd[4184]: Connection closed by 10.0.0.1 port 36664 May 27 03:11:40.377599 sshd-session[4182]: pam_unix(sshd:session): session closed for user core May 27 03:11:40.393936 systemd[1]: sshd@17-10.0.0.11:22-10.0.0.1:36664.service: Deactivated successfully. May 27 03:11:40.399167 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:11:40.405416 systemd-logind[1537]: Session 18 logged out. Waiting for processes to exit. May 27 03:11:40.410231 systemd-logind[1537]: Removed session 18. May 27 03:11:40.413300 systemd[1]: Started sshd@18-10.0.0.11:22-10.0.0.1:36672.service - OpenSSH per-connection server daemon (10.0.0.1:36672). May 27 03:11:40.468606 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 36672 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:40.470038 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:40.474906 systemd-logind[1537]: New session 19 of user core. May 27 03:11:40.488885 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:11:40.700435 sshd[4226]: Connection closed by 10.0.0.1 port 36672 May 27 03:11:40.701735 sshd-session[4224]: pam_unix(sshd:session): session closed for user core May 27 03:11:40.712997 systemd[1]: sshd@18-10.0.0.11:22-10.0.0.1:36672.service: Deactivated successfully. May 27 03:11:40.715566 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:11:40.716851 systemd-logind[1537]: Session 19 logged out. Waiting for processes to exit. 
May 27 03:11:40.720352 systemd[1]: Started sshd@19-10.0.0.11:22-10.0.0.1:36686.service - OpenSSH per-connection server daemon (10.0.0.1:36686). May 27 03:11:40.721216 systemd-logind[1537]: Removed session 19. May 27 03:11:40.768182 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 36686 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:40.769906 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:40.774338 systemd-logind[1537]: New session 20 of user core. May 27 03:11:40.783957 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:11:40.895301 sshd[4240]: Connection closed by 10.0.0.1 port 36686 May 27 03:11:40.895607 sshd-session[4238]: pam_unix(sshd:session): session closed for user core May 27 03:11:40.900372 systemd[1]: sshd@19-10.0.0.11:22-10.0.0.1:36686.service: Deactivated successfully. May 27 03:11:40.902697 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:11:40.903723 systemd-logind[1537]: Session 20 logged out. Waiting for processes to exit. May 27 03:11:40.905553 systemd-logind[1537]: Removed session 20. May 27 03:11:45.921262 systemd[1]: Started sshd@20-10.0.0.11:22-10.0.0.1:39730.service - OpenSSH per-connection server daemon (10.0.0.1:39730). May 27 03:11:45.989340 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 39730 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:45.991136 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:45.996864 systemd-logind[1537]: New session 21 of user core. May 27 03:11:46.007861 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 27 03:11:46.127341 sshd[4257]: Connection closed by 10.0.0.1 port 39730 May 27 03:11:46.127646 sshd-session[4255]: pam_unix(sshd:session): session closed for user core May 27 03:11:46.131002 systemd[1]: sshd@20-10.0.0.11:22-10.0.0.1:39730.service: Deactivated successfully. May 27 03:11:46.133043 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:11:46.134796 systemd-logind[1537]: Session 21 logged out. Waiting for processes to exit. May 27 03:11:46.136042 systemd-logind[1537]: Removed session 21. May 27 03:11:51.146442 systemd[1]: Started sshd@21-10.0.0.11:22-10.0.0.1:39746.service - OpenSSH per-connection server daemon (10.0.0.1:39746). May 27 03:11:51.195411 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 39746 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:51.197401 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:51.202632 systemd-logind[1537]: New session 22 of user core. May 27 03:11:51.208903 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:11:51.326327 sshd[4275]: Connection closed by 10.0.0.1 port 39746 May 27 03:11:51.326687 sshd-session[4273]: pam_unix(sshd:session): session closed for user core May 27 03:11:51.331156 systemd[1]: sshd@21-10.0.0.11:22-10.0.0.1:39746.service: Deactivated successfully. May 27 03:11:51.333582 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:11:51.334901 systemd-logind[1537]: Session 22 logged out. Waiting for processes to exit. May 27 03:11:51.336562 systemd-logind[1537]: Removed session 22. May 27 03:11:56.350902 systemd[1]: Started sshd@22-10.0.0.11:22-10.0.0.1:49672.service - OpenSSH per-connection server daemon (10.0.0.1:49672). 
May 27 03:11:56.405379 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 49672 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:11:56.407290 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:11:56.412017 systemd-logind[1537]: New session 23 of user core. May 27 03:11:56.421912 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:11:56.533360 sshd[4290]: Connection closed by 10.0.0.1 port 49672 May 27 03:11:56.533761 sshd-session[4288]: pam_unix(sshd:session): session closed for user core May 27 03:11:56.538251 systemd[1]: sshd@22-10.0.0.11:22-10.0.0.1:49672.service: Deactivated successfully. May 27 03:11:56.540582 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:11:56.541436 systemd-logind[1537]: Session 23 logged out. Waiting for processes to exit. May 27 03:11:56.542956 systemd-logind[1537]: Removed session 23. May 27 03:11:58.693899 kubelet[2683]: E0527 03:11:58.693825 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:11:59.693862 kubelet[2683]: E0527 03:11:59.693788 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:01.552821 systemd[1]: Started sshd@23-10.0.0.11:22-10.0.0.1:49674.service - OpenSSH per-connection server daemon (10.0.0.1:49674). May 27 03:12:01.607211 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 49674 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:12:01.609483 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:12:01.615381 systemd-logind[1537]: New session 24 of user core. May 27 03:12:01.627009 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 27 03:12:01.743393 sshd[4306]: Connection closed by 10.0.0.1 port 49674 May 27 03:12:01.743812 sshd-session[4304]: pam_unix(sshd:session): session closed for user core May 27 03:12:01.752659 systemd[1]: sshd@23-10.0.0.11:22-10.0.0.1:49674.service: Deactivated successfully. May 27 03:12:01.754582 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:12:01.755511 systemd-logind[1537]: Session 24 logged out. Waiting for processes to exit. May 27 03:12:01.758298 systemd[1]: Started sshd@24-10.0.0.11:22-10.0.0.1:49678.service - OpenSSH per-connection server daemon (10.0.0.1:49678). May 27 03:12:01.758966 systemd-logind[1537]: Removed session 24. May 27 03:12:01.809141 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 49678 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:12:01.810787 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:12:01.815503 systemd-logind[1537]: New session 25 of user core. May 27 03:12:01.824866 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 27 03:12:02.693451 kubelet[2683]: E0527 03:12:02.693383 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:03.360649 kubelet[2683]: I0527 03:12:03.359412 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-855nv" podStartSLOduration=78.359392726 podStartE2EDuration="1m18.359392726s" podCreationTimestamp="2025-05-27 03:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:11:07.916889327 +0000 UTC m=+28.343197363" watchObservedRunningTime="2025-05-27 03:12:03.359392726 +0000 UTC m=+83.785700742" May 27 03:12:03.385873 containerd[1550]: time="2025-05-27T03:12:03.385818738Z" level=info msg="StopContainer for \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" with timeout 30 (s)" May 27 03:12:03.395967 containerd[1550]: time="2025-05-27T03:12:03.395918694Z" level=info msg="Stop container \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" with signal terminated" May 27 03:12:03.409783 containerd[1550]: time="2025-05-27T03:12:03.409709165Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:12:03.411175 containerd[1550]: time="2025-05-27T03:12:03.411128900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" id:\"443ad0cf737393d0b3fd5685cbc5437760303a2356058befe50e8bfe3ff65b11\" pid:4341 exited_at:{seconds:1748315523 nanos:410749884}" May 27 03:12:03.411599 systemd[1]: 
cri-containerd-2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63.scope: Deactivated successfully. May 27 03:12:03.413016 containerd[1550]: time="2025-05-27T03:12:03.412986210Z" level=info msg="StopContainer for \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" with timeout 2 (s)" May 27 03:12:03.413303 containerd[1550]: time="2025-05-27T03:12:03.413276151Z" level=info msg="Stop container \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" with signal terminated" May 27 03:12:03.414142 containerd[1550]: time="2025-05-27T03:12:03.414115053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" id:\"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" pid:3226 exited_at:{seconds:1748315523 nanos:413885275}" May 27 03:12:03.414207 containerd[1550]: time="2025-05-27T03:12:03.414174503Z" level=info msg="received exit event container_id:\"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" id:\"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" pid:3226 exited_at:{seconds:1748315523 nanos:413885275}" May 27 03:12:03.422067 systemd-networkd[1488]: lxc_health: Link DOWN May 27 03:12:03.422076 systemd-networkd[1488]: lxc_health: Lost carrier May 27 03:12:03.441267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63-rootfs.mount: Deactivated successfully. May 27 03:12:03.442123 systemd[1]: cri-containerd-f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e.scope: Deactivated successfully. 
May 27 03:12:03.444677 containerd[1550]: time="2025-05-27T03:12:03.444127696Z" level=info msg="received exit event container_id:\"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" id:\"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" pid:3339 exited_at:{seconds:1748315523 nanos:443929638}" May 27 03:12:03.444677 containerd[1550]: time="2025-05-27T03:12:03.444197516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" id:\"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" pid:3339 exited_at:{seconds:1748315523 nanos:443929638}" May 27 03:12:03.442653 systemd[1]: cri-containerd-f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e.scope: Consumed 7.015s CPU time, 122.5M memory peak, 216K read from disk, 13.3M written to disk. May 27 03:12:03.466888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e-rootfs.mount: Deactivated successfully. 
May 27 03:12:03.517498 containerd[1550]: time="2025-05-27T03:12:03.517442271Z" level=info msg="StopContainer for \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" returns successfully" May 27 03:12:03.518137 containerd[1550]: time="2025-05-27T03:12:03.518113101Z" level=info msg="StopPodSandbox for \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\"" May 27 03:12:03.518215 containerd[1550]: time="2025-05-27T03:12:03.518170117Z" level=info msg="Container to stop \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:12:03.518215 containerd[1550]: time="2025-05-27T03:12:03.518182660Z" level=info msg="Container to stop \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:12:03.518215 containerd[1550]: time="2025-05-27T03:12:03.518190525Z" level=info msg="Container to stop \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:12:03.518215 containerd[1550]: time="2025-05-27T03:12:03.518198139Z" level=info msg="Container to stop \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:12:03.518215 containerd[1550]: time="2025-05-27T03:12:03.518206305Z" level=info msg="Container to stop \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:12:03.523064 containerd[1550]: time="2025-05-27T03:12:03.523027336Z" level=info msg="StopContainer for \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" returns successfully" May 27 03:12:03.523579 containerd[1550]: time="2025-05-27T03:12:03.523552313Z" level=info msg="StopPodSandbox for 
\"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\"" May 27 03:12:03.523647 containerd[1550]: time="2025-05-27T03:12:03.523629267Z" level=info msg="Container to stop \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:12:03.526114 systemd[1]: cri-containerd-46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6.scope: Deactivated successfully. May 27 03:12:03.527269 containerd[1550]: time="2025-05-27T03:12:03.527193006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" id:\"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" pid:2874 exit_status:137 exited_at:{seconds:1748315523 nanos:526821394}" May 27 03:12:03.532221 systemd[1]: cri-containerd-56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a.scope: Deactivated successfully. May 27 03:12:03.558929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a-rootfs.mount: Deactivated successfully. May 27 03:12:03.559075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6-rootfs.mount: Deactivated successfully. 
May 27 03:12:03.564901 containerd[1550]: time="2025-05-27T03:12:03.564868445Z" level=info msg="shim disconnected" id=46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6 namespace=k8s.io May 27 03:12:03.564901 containerd[1550]: time="2025-05-27T03:12:03.564897479Z" level=warning msg="cleaning up after shim disconnected" id=46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6 namespace=k8s.io May 27 03:12:03.590209 containerd[1550]: time="2025-05-27T03:12:03.564904573Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:12:03.590381 containerd[1550]: time="2025-05-27T03:12:03.565030306Z" level=info msg="shim disconnected" id=56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a namespace=k8s.io May 27 03:12:03.590381 containerd[1550]: time="2025-05-27T03:12:03.590354336Z" level=warning msg="cleaning up after shim disconnected" id=56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a namespace=k8s.io May 27 03:12:03.590381 containerd[1550]: time="2025-05-27T03:12:03.590364475Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:12:03.614629 containerd[1550]: time="2025-05-27T03:12:03.614388992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\" id:\"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\" pid:2881 exit_status:137 exited_at:{seconds:1748315523 nanos:533492763}" May 27 03:12:03.617020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6-shm.mount: Deactivated successfully. 
May 27 03:12:03.625177 containerd[1550]: time="2025-05-27T03:12:03.625105877Z" level=info msg="received exit event sandbox_id:\"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\" exit_status:137 exited_at:{seconds:1748315523 nanos:533492763}" May 27 03:12:03.626021 containerd[1550]: time="2025-05-27T03:12:03.625358047Z" level=info msg="received exit event sandbox_id:\"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" exit_status:137 exited_at:{seconds:1748315523 nanos:526821394}" May 27 03:12:03.626021 containerd[1550]: time="2025-05-27T03:12:03.625882715Z" level=info msg="TearDown network for sandbox \"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\" successfully" May 27 03:12:03.626021 containerd[1550]: time="2025-05-27T03:12:03.625909023Z" level=info msg="StopPodSandbox for \"56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a\" returns successfully" May 27 03:12:03.626856 containerd[1550]: time="2025-05-27T03:12:03.626821203Z" level=info msg="TearDown network for sandbox \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" successfully" May 27 03:12:03.626856 containerd[1550]: time="2025-05-27T03:12:03.626853653Z" level=info msg="StopPodSandbox for \"46634ed8e94667802409906765f718d55ff5438587a279b12336471a3d8009b6\" returns successfully" May 27 03:12:03.722337 kubelet[2683]: I0527 03:12:03.722277 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-config-path\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.722337 kubelet[2683]: I0527 03:12:03.722328 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-cgroup\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" 
(UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.722337 kubelet[2683]: I0527 03:12:03.722352 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-hubble-tls\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.722953 kubelet[2683]: I0527 03:12:03.722367 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-net\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.722953 kubelet[2683]: I0527 03:12:03.722386 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62xpf\" (UniqueName: \"kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-kube-api-access-62xpf\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.722953 kubelet[2683]: I0527 03:12:03.722402 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-etc-cni-netd\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.722953 kubelet[2683]: I0527 03:12:03.722417 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9ghr\" (UniqueName: \"kubernetes.io/projected/647caad1-1951-4802-a9e8-e135e8876471-kube-api-access-d9ghr\") pod \"647caad1-1951-4802-a9e8-e135e8876471\" (UID: \"647caad1-1951-4802-a9e8-e135e8876471\") " May 27 03:12:03.722953 kubelet[2683]: I0527 03:12:03.722431 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-hostproc\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.722953 kubelet[2683]: I0527 03:12:03.722446 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/647caad1-1951-4802-a9e8-e135e8876471-cilium-config-path\") pod \"647caad1-1951-4802-a9e8-e135e8876471\" (UID: \"647caad1-1951-4802-a9e8-e135e8876471\") " May 27 03:12:03.723147 kubelet[2683]: I0527 03:12:03.722459 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-lib-modules\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.723147 kubelet[2683]: I0527 03:12:03.722485 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-bpf-maps\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.723147 kubelet[2683]: I0527 03:12:03.722506 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cni-path\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.723147 kubelet[2683]: I0527 03:12:03.722526 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-kernel\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.723147 kubelet[2683]: I0527 03:12:03.722539 2683 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-xtables-lock\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.723147 kubelet[2683]: I0527 03:12:03.722552 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-run\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.723341 kubelet[2683]: I0527 03:12:03.722567 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0e3475b-616a-479f-82ad-9ec90d101cdd-clustermesh-secrets\") pod \"b0e3475b-616a-479f-82ad-9ec90d101cdd\" (UID: \"b0e3475b-616a-479f-82ad-9ec90d101cdd\") " May 27 03:12:03.724574 kubelet[2683]: I0527 03:12:03.724451 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.724653 kubelet[2683]: I0527 03:12:03.724554 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.724783 kubelet[2683]: I0527 03:12:03.724765 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.724931 kubelet[2683]: I0527 03:12:03.724874 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.724931 kubelet[2683]: I0527 03:12:03.724904 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.725114 kubelet[2683]: I0527 03:12:03.725051 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.725114 kubelet[2683]: I0527 03:12:03.725082 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.725224 kubelet[2683]: I0527 03:12:03.725207 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.725794 kubelet[2683]: I0527 03:12:03.725764 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.725872 kubelet[2683]: I0527 03:12:03.725795 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:12:03.725872 kubelet[2683]: I0527 03:12:03.725853 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 03:12:03.727705 kubelet[2683]: I0527 03:12:03.727670 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647caad1-1951-4802-a9e8-e135e8876471-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "647caad1-1951-4802-a9e8-e135e8876471" (UID: "647caad1-1951-4802-a9e8-e135e8876471"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:12:03.727882 kubelet[2683]: I0527 03:12:03.727814 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:12:03.728209 kubelet[2683]: I0527 03:12:03.728175 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e3475b-616a-479f-82ad-9ec90d101cdd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:12:03.729228 kubelet[2683]: I0527 03:12:03.729177 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647caad1-1951-4802-a9e8-e135e8876471-kube-api-access-d9ghr" (OuterVolumeSpecName: "kube-api-access-d9ghr") pod "647caad1-1951-4802-a9e8-e135e8876471" (UID: "647caad1-1951-4802-a9e8-e135e8876471"). InnerVolumeSpecName "kube-api-access-d9ghr". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:12:03.729584 kubelet[2683]: I0527 03:12:03.729556 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-kube-api-access-62xpf" (OuterVolumeSpecName: "kube-api-access-62xpf") pod "b0e3475b-616a-479f-82ad-9ec90d101cdd" (UID: "b0e3475b-616a-479f-82ad-9ec90d101cdd"). InnerVolumeSpecName "kube-api-access-62xpf". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:12:03.823571 kubelet[2683]: I0527 03:12:03.823504 2683 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823571 kubelet[2683]: I0527 03:12:03.823552 2683 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-62xpf\" (UniqueName: \"kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-kube-api-access-62xpf\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823571 kubelet[2683]: I0527 03:12:03.823566 2683 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823571 kubelet[2683]: I0527 03:12:03.823579 2683 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9ghr\" (UniqueName: 
\"kubernetes.io/projected/647caad1-1951-4802-a9e8-e135e8876471-kube-api-access-d9ghr\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823571 kubelet[2683]: I0527 03:12:03.823591 2683 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/647caad1-1951-4802-a9e8-e135e8876471-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823571 kubelet[2683]: I0527 03:12:03.823601 2683 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-hostproc\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823612 2683 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-lib-modules\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823623 2683 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823633 2683 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cni-path\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823643 2683 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823653 2683 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-xtables-lock\") on node \"localhost\" DevicePath \"\"" 
May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823663 2683 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-run\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823673 2683 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0e3475b-616a-479f-82ad-9ec90d101cdd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.823933 kubelet[2683]: I0527 03:12:03.823683 2683 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.824154 kubelet[2683]: I0527 03:12:03.823704 2683 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0e3475b-616a-479f-82ad-9ec90d101cdd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.824154 kubelet[2683]: I0527 03:12:03.823744 2683 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0e3475b-616a-479f-82ad-9ec90d101cdd-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 27 03:12:03.959224 kubelet[2683]: I0527 03:12:03.959099 2683 scope.go:117] "RemoveContainer" containerID="2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63" May 27 03:12:03.962212 containerd[1550]: time="2025-05-27T03:12:03.962162080Z" level=info msg="RemoveContainer for \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\"" May 27 03:12:03.969067 systemd[1]: Removed slice kubepods-besteffort-pod647caad1_1951_4802_a9e8_e135e8876471.slice - libcontainer container kubepods-besteffort-pod647caad1_1951_4802_a9e8_e135e8876471.slice. 
May 27 03:12:03.974791 systemd[1]: Removed slice kubepods-burstable-podb0e3475b_616a_479f_82ad_9ec90d101cdd.slice - libcontainer container kubepods-burstable-podb0e3475b_616a_479f_82ad_9ec90d101cdd.slice. May 27 03:12:03.975002 systemd[1]: kubepods-burstable-podb0e3475b_616a_479f_82ad_9ec90d101cdd.slice: Consumed 7.150s CPU time, 122.9M memory peak, 220K read from disk, 13.3M written to disk. May 27 03:12:04.031491 containerd[1550]: time="2025-05-27T03:12:04.031417023Z" level=info msg="RemoveContainer for \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" returns successfully" May 27 03:12:04.035300 kubelet[2683]: I0527 03:12:04.035254 2683 scope.go:117] "RemoveContainer" containerID="2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63" May 27 03:12:04.035637 containerd[1550]: time="2025-05-27T03:12:04.035558200Z" level=error msg="ContainerStatus for \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\": not found" May 27 03:12:04.039164 kubelet[2683]: E0527 03:12:04.039128 2683 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\": not found" containerID="2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63" May 27 03:12:04.039319 kubelet[2683]: I0527 03:12:04.039163 2683 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63"} err="failed to get container status \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c64fb614151588ee0c01941ab5d0a0c75e58beb6ac75ada29e15575b2b2ea63\": not 
found" May 27 03:12:04.039319 kubelet[2683]: I0527 03:12:04.039237 2683 scope.go:117] "RemoveContainer" containerID="f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e" May 27 03:12:04.041229 containerd[1550]: time="2025-05-27T03:12:04.041199347Z" level=info msg="RemoveContainer for \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\"" May 27 03:12:04.149016 containerd[1550]: time="2025-05-27T03:12:04.148948345Z" level=info msg="RemoveContainer for \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" returns successfully" May 27 03:12:04.149302 kubelet[2683]: I0527 03:12:04.149266 2683 scope.go:117] "RemoveContainer" containerID="a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd" May 27 03:12:04.151144 containerd[1550]: time="2025-05-27T03:12:04.151096143Z" level=info msg="RemoveContainer for \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\"" May 27 03:12:04.257303 containerd[1550]: time="2025-05-27T03:12:04.257254232Z" level=info msg="RemoveContainer for \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" returns successfully" May 27 03:12:04.257590 kubelet[2683]: I0527 03:12:04.257528 2683 scope.go:117] "RemoveContainer" containerID="03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945" May 27 03:12:04.260280 containerd[1550]: time="2025-05-27T03:12:04.260240795Z" level=info msg="RemoveContainer for \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\"" May 27 03:12:04.346364 containerd[1550]: time="2025-05-27T03:12:04.346284900Z" level=info msg="RemoveContainer for \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" returns successfully" May 27 03:12:04.346652 kubelet[2683]: I0527 03:12:04.346601 2683 scope.go:117] "RemoveContainer" containerID="1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455" May 27 03:12:04.348433 containerd[1550]: time="2025-05-27T03:12:04.348367616Z" level=info msg="RemoveContainer for 
\"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\"" May 27 03:12:04.441050 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56f5049137a1d5448f7ceb83bb862ad1dc0ff44b7d25519d3d74163510fec04a-shm.mount: Deactivated successfully. May 27 03:12:04.441185 systemd[1]: var-lib-kubelet-pods-647caad1\x2d1951\x2d4802\x2da9e8\x2de135e8876471-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd9ghr.mount: Deactivated successfully. May 27 03:12:04.441265 systemd[1]: var-lib-kubelet-pods-b0e3475b\x2d616a\x2d479f\x2d82ad\x2d9ec90d101cdd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d62xpf.mount: Deactivated successfully. May 27 03:12:04.441364 systemd[1]: var-lib-kubelet-pods-b0e3475b\x2d616a\x2d479f\x2d82ad\x2d9ec90d101cdd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 03:12:04.441436 systemd[1]: var-lib-kubelet-pods-b0e3475b\x2d616a\x2d479f\x2d82ad\x2d9ec90d101cdd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 27 03:12:04.445595 containerd[1550]: time="2025-05-27T03:12:04.445552199Z" level=info msg="RemoveContainer for \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" returns successfully" May 27 03:12:04.445893 kubelet[2683]: I0527 03:12:04.445858 2683 scope.go:117] "RemoveContainer" containerID="9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1" May 27 03:12:04.447253 containerd[1550]: time="2025-05-27T03:12:04.447210825Z" level=info msg="RemoveContainer for \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\"" May 27 03:12:04.556940 containerd[1550]: time="2025-05-27T03:12:04.556767015Z" level=info msg="RemoveContainer for \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" returns successfully" May 27 03:12:04.557159 kubelet[2683]: I0527 03:12:04.557089 2683 scope.go:117] "RemoveContainer" containerID="f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e" May 27 03:12:04.557744 containerd[1550]: time="2025-05-27T03:12:04.557658268Z" level=error msg="ContainerStatus for \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\": not found" May 27 03:12:04.557966 kubelet[2683]: E0527 03:12:04.557935 2683 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\": not found" containerID="f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e" May 27 03:12:04.558016 kubelet[2683]: I0527 03:12:04.557981 2683 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e"} err="failed to get container status 
\"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f59e2b52eb12b054069c843168e9515d201f4d396aed46a54bac4ed9c6eb911e\": not found" May 27 03:12:04.558044 kubelet[2683]: I0527 03:12:04.558014 2683 scope.go:117] "RemoveContainer" containerID="a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd" May 27 03:12:04.558417 containerd[1550]: time="2025-05-27T03:12:04.558365738Z" level=error msg="ContainerStatus for \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\": not found" May 27 03:12:04.558590 kubelet[2683]: E0527 03:12:04.558544 2683 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\": not found" containerID="a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd" May 27 03:12:04.558590 kubelet[2683]: I0527 03:12:04.558571 2683 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd"} err="failed to get container status \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\": rpc error: code = NotFound desc = an error occurred when try to find container \"a669359e46dcd344de0f15981fad6e07a58042daab43e382ffe3f3c9faf17abd\": not found" May 27 03:12:04.558590 kubelet[2683]: I0527 03:12:04.558588 2683 scope.go:117] "RemoveContainer" containerID="03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945" May 27 03:12:04.558886 containerd[1550]: time="2025-05-27T03:12:04.558826478Z" level=error msg="ContainerStatus for \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\": not found" May 27 03:12:04.559043 kubelet[2683]: E0527 03:12:04.559016 2683 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\": not found" containerID="03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945" May 27 03:12:04.559043 kubelet[2683]: I0527 03:12:04.559036 2683 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945"} err="failed to get container status \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\": rpc error: code = NotFound desc = an error occurred when try to find container \"03e3094e3a765dc8b87da820dbe44aecf48432a58923207c69fe67470adb2945\": not found" May 27 03:12:04.559043 kubelet[2683]: I0527 03:12:04.559049 2683 scope.go:117] "RemoveContainer" containerID="1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455" May 27 03:12:04.559287 containerd[1550]: time="2025-05-27T03:12:04.559246612Z" level=error msg="ContainerStatus for \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\": not found" May 27 03:12:04.559467 kubelet[2683]: E0527 03:12:04.559403 2683 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\": not found" containerID="1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455" May 27 03:12:04.559467 kubelet[2683]: I0527 03:12:04.559434 2683 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455"} err="failed to get container status \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\": rpc error: code = NotFound desc = an error occurred when try to find container \"1337211d15aa76c915a2f64d97ec2bae4aaba1a861eb83940ed78e709f182455\": not found" May 27 03:12:04.559467 kubelet[2683]: I0527 03:12:04.559465 2683 scope.go:117] "RemoveContainer" containerID="9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1" May 27 03:12:04.559757 containerd[1550]: time="2025-05-27T03:12:04.559618356Z" level=error msg="ContainerStatus for \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\": not found" May 27 03:12:04.559993 kubelet[2683]: E0527 03:12:04.559912 2683 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\": not found" containerID="9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1" May 27 03:12:04.559993 kubelet[2683]: I0527 03:12:04.559954 2683 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1"} err="failed to get container status \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ace22013a60c9c0805a00277d8a47d4c41e5f78cb970679639720f64d609be1\": not found" May 27 03:12:04.764565 kubelet[2683]: E0527 03:12:04.764516 2683 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 03:12:05.175097 sshd[4321]: Connection closed by 10.0.0.1 port 49678 May 27 03:12:05.175818 sshd-session[4319]: pam_unix(sshd:session): session closed for user core May 27 03:12:05.189883 systemd[1]: sshd@24-10.0.0.11:22-10.0.0.1:49678.service: Deactivated successfully. May 27 03:12:05.192703 systemd[1]: session-25.scope: Deactivated successfully. May 27 03:12:05.193729 systemd-logind[1537]: Session 25 logged out. Waiting for processes to exit. May 27 03:12:05.197580 systemd[1]: Started sshd@25-10.0.0.11:22-10.0.0.1:39108.service - OpenSSH per-connection server daemon (10.0.0.1:39108). May 27 03:12:05.198696 systemd-logind[1537]: Removed session 25. May 27 03:12:05.254067 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 39108 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:12:05.256127 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:12:05.261514 systemd-logind[1537]: New session 26 of user core. May 27 03:12:05.274977 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 03:12:05.696052 kubelet[2683]: I0527 03:12:05.695999 2683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647caad1-1951-4802-a9e8-e135e8876471" path="/var/lib/kubelet/pods/647caad1-1951-4802-a9e8-e135e8876471/volumes" May 27 03:12:05.696839 kubelet[2683]: I0527 03:12:05.696812 2683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e3475b-616a-479f-82ad-9ec90d101cdd" path="/var/lib/kubelet/pods/b0e3475b-616a-479f-82ad-9ec90d101cdd/volumes" May 27 03:12:05.759987 sshd[4476]: Connection closed by 10.0.0.1 port 39108 May 27 03:12:05.761478 sshd-session[4474]: pam_unix(sshd:session): session closed for user core May 27 03:12:05.770953 systemd[1]: sshd@25-10.0.0.11:22-10.0.0.1:39108.service: Deactivated successfully. 
May 27 03:12:05.773388 systemd[1]: session-26.scope: Deactivated successfully. May 27 03:12:05.774643 systemd-logind[1537]: Session 26 logged out. Waiting for processes to exit. May 27 03:12:05.778998 systemd-logind[1537]: Removed session 26. May 27 03:12:05.782086 systemd[1]: Started sshd@26-10.0.0.11:22-10.0.0.1:39116.service - OpenSSH per-connection server daemon (10.0.0.1:39116). May 27 03:12:05.799027 kubelet[2683]: I0527 03:12:05.798981 2683 memory_manager.go:355] "RemoveStaleState removing state" podUID="647caad1-1951-4802-a9e8-e135e8876471" containerName="cilium-operator" May 27 03:12:05.799027 kubelet[2683]: I0527 03:12:05.799012 2683 memory_manager.go:355] "RemoveStaleState removing state" podUID="b0e3475b-616a-479f-82ad-9ec90d101cdd" containerName="cilium-agent" May 27 03:12:05.816763 systemd[1]: Created slice kubepods-burstable-pod7ef83aa7_6c1e_4c11_bf3f_a22484280fc5.slice - libcontainer container kubepods-burstable-pod7ef83aa7_6c1e_4c11_bf3f_a22484280fc5.slice. May 27 03:12:05.838407 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 39116 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 03:12:05.840804 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:12:05.845761 systemd-logind[1537]: New session 27 of user core. May 27 03:12:05.855847 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 03:12:05.909689 sshd[4490]: Connection closed by 10.0.0.1 port 39116 May 27 03:12:05.910222 sshd-session[4488]: pam_unix(sshd:session): session closed for user core May 27 03:12:05.923812 systemd[1]: sshd@26-10.0.0.11:22-10.0.0.1:39116.service: Deactivated successfully. May 27 03:12:05.926530 systemd[1]: session-27.scope: Deactivated successfully. May 27 03:12:05.927440 systemd-logind[1537]: Session 27 logged out. Waiting for processes to exit. 
May 27 03:12:05.932363 systemd[1]: Started sshd@27-10.0.0.11:22-10.0.0.1:39124.service - OpenSSH per-connection server daemon (10.0.0.1:39124). May 27 03:12:05.933061 systemd-logind[1537]: Removed session 27. May 27 03:12:05.938208 kubelet[2683]: I0527 03:12:05.938161 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-cilium-cgroup\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938208 kubelet[2683]: I0527 03:12:05.938198 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-cni-path\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938208 kubelet[2683]: I0527 03:12:05.938216 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-etc-cni-netd\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938208 kubelet[2683]: I0527 03:12:05.938229 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-lib-modules\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938452 kubelet[2683]: I0527 03:12:05.938246 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-clustermesh-secrets\") pod \"cilium-mwrhq\" (UID: 
\"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938452 kubelet[2683]: I0527 03:12:05.938273 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-cilium-ipsec-secrets\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938452 kubelet[2683]: I0527 03:12:05.938410 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-cilium-config-path\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938533 kubelet[2683]: I0527 03:12:05.938489 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-host-proc-sys-kernel\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938533 kubelet[2683]: I0527 03:12:05.938525 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-cilium-run\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938586 kubelet[2683]: I0527 03:12:05.938572 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-bpf-maps\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938612 kubelet[2683]: 
I0527 03:12:05.938598 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-host-proc-sys-net\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938641 kubelet[2683]: I0527 03:12:05.938621 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-hubble-tls\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938672 kubelet[2683]: I0527 03:12:05.938643 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-xtables-lock\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938672 kubelet[2683]: I0527 03:12:05.938667 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-hostproc\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.938738 kubelet[2683]: I0527 03:12:05.938700 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtxrk\" (UniqueName: \"kubernetes.io/projected/7ef83aa7-6c1e-4c11-bf3f-a22484280fc5-kube-api-access-xtxrk\") pod \"cilium-mwrhq\" (UID: \"7ef83aa7-6c1e-4c11-bf3f-a22484280fc5\") " pod="kube-system/cilium-mwrhq" May 27 03:12:05.986261 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 39124 ssh2: RSA SHA256:RIzveOASzKxUpo7e2hU2FnoYolpMQWvDzgVWbpJtJr0 May 27 
03:12:05.988260 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:12:05.993190 systemd-logind[1537]: New session 28 of user core. May 27 03:12:06.002855 systemd[1]: Started session-28.scope - Session 28 of User core. May 27 03:12:06.126202 kubelet[2683]: E0527 03:12:06.126149 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:06.126886 containerd[1550]: time="2025-05-27T03:12:06.126840877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwrhq,Uid:7ef83aa7-6c1e-4c11-bf3f-a22484280fc5,Namespace:kube-system,Attempt:0,}" May 27 03:12:06.146860 containerd[1550]: time="2025-05-27T03:12:06.146803897Z" level=info msg="connecting to shim c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc" address="unix:///run/containerd/s/2fd312128dd8361f2eba7500a6062d01662db79aa31df401fb3c1be606ee9a46" namespace=k8s.io protocol=ttrpc version=3 May 27 03:12:06.180065 systemd[1]: Started cri-containerd-c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc.scope - libcontainer container c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc. 
May 27 03:12:06.216415 containerd[1550]: time="2025-05-27T03:12:06.216361455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwrhq,Uid:7ef83aa7-6c1e-4c11-bf3f-a22484280fc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\"" May 27 03:12:06.217451 kubelet[2683]: E0527 03:12:06.217423 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:06.221287 containerd[1550]: time="2025-05-27T03:12:06.221232253Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 03:12:06.229296 containerd[1550]: time="2025-05-27T03:12:06.229261073Z" level=info msg="Container 41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3: CDI devices from CRI Config.CDIDevices: []" May 27 03:12:06.238066 containerd[1550]: time="2025-05-27T03:12:06.237950108Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3\"" May 27 03:12:06.238700 containerd[1550]: time="2025-05-27T03:12:06.238671339Z" level=info msg="StartContainer for \"41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3\"" May 27 03:12:06.239492 containerd[1550]: time="2025-05-27T03:12:06.239470074Z" level=info msg="connecting to shim 41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3" address="unix:///run/containerd/s/2fd312128dd8361f2eba7500a6062d01662db79aa31df401fb3c1be606ee9a46" protocol=ttrpc version=3 May 27 03:12:06.267003 systemd[1]: Started cri-containerd-41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3.scope - libcontainer 
container 41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3. May 27 03:12:06.303072 containerd[1550]: time="2025-05-27T03:12:06.303013223Z" level=info msg="StartContainer for \"41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3\" returns successfully" May 27 03:12:06.313265 systemd[1]: cri-containerd-41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3.scope: Deactivated successfully. May 27 03:12:06.315079 containerd[1550]: time="2025-05-27T03:12:06.315021011Z" level=info msg="received exit event container_id:\"41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3\" id:\"41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3\" pid:4571 exited_at:{seconds:1748315526 nanos:314563114}" May 27 03:12:06.315766 containerd[1550]: time="2025-05-27T03:12:06.315740729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3\" id:\"41ecb3b40d17d6feeccef5e344cfd5f7c855a0321e7fbe3a4b135c9fda179cb3\" pid:4571 exited_at:{seconds:1748315526 nanos:314563114}" May 27 03:12:06.978356 kubelet[2683]: E0527 03:12:06.978322 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:06.980126 containerd[1550]: time="2025-05-27T03:12:06.980073901Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 03:12:06.988591 containerd[1550]: time="2025-05-27T03:12:06.988531623Z" level=info msg="Container 7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4: CDI devices from CRI Config.CDIDevices: []" May 27 03:12:06.996676 containerd[1550]: time="2025-05-27T03:12:06.996615065Z" level=info msg="CreateContainer within sandbox 
\"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4\"" May 27 03:12:06.997260 containerd[1550]: time="2025-05-27T03:12:06.997213385Z" level=info msg="StartContainer for \"7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4\"" May 27 03:12:06.998140 containerd[1550]: time="2025-05-27T03:12:06.998116676Z" level=info msg="connecting to shim 7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4" address="unix:///run/containerd/s/2fd312128dd8361f2eba7500a6062d01662db79aa31df401fb3c1be606ee9a46" protocol=ttrpc version=3 May 27 03:12:07.025086 systemd[1]: Started cri-containerd-7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4.scope - libcontainer container 7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4. May 27 03:12:07.063676 containerd[1550]: time="2025-05-27T03:12:07.063635618Z" level=info msg="StartContainer for \"7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4\" returns successfully" May 27 03:12:07.069517 systemd[1]: cri-containerd-7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4.scope: Deactivated successfully. 
May 27 03:12:07.070176 containerd[1550]: time="2025-05-27T03:12:07.070140723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4\" id:\"7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4\" pid:4617 exited_at:{seconds:1748315527 nanos:69852503}" May 27 03:12:07.070391 containerd[1550]: time="2025-05-27T03:12:07.070341850Z" level=info msg="received exit event container_id:\"7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4\" id:\"7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4\" pid:4617 exited_at:{seconds:1748315527 nanos:69852503}" May 27 03:12:07.090920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a5c78610586062bf2f465eb9806f84067acb18e7cee299a1da1ea349e7447f4-rootfs.mount: Deactivated successfully. May 27 03:12:07.694066 kubelet[2683]: E0527 03:12:07.694026 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:07.982195 kubelet[2683]: E0527 03:12:07.982064 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:07.984897 containerd[1550]: time="2025-05-27T03:12:07.984844732Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 03:12:08.019972 containerd[1550]: time="2025-05-27T03:12:08.019241205Z" level=info msg="Container 10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae: CDI devices from CRI Config.CDIDevices: []" May 27 03:12:08.039337 containerd[1550]: time="2025-05-27T03:12:08.039264247Z" level=info msg="CreateContainer within sandbox 
\"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae\"" May 27 03:12:08.039911 containerd[1550]: time="2025-05-27T03:12:08.039879793Z" level=info msg="StartContainer for \"10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae\"" May 27 03:12:08.041363 containerd[1550]: time="2025-05-27T03:12:08.041317022Z" level=info msg="connecting to shim 10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae" address="unix:///run/containerd/s/2fd312128dd8361f2eba7500a6062d01662db79aa31df401fb3c1be606ee9a46" protocol=ttrpc version=3 May 27 03:12:08.064899 systemd[1]: Started cri-containerd-10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae.scope - libcontainer container 10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae. May 27 03:12:08.108868 systemd[1]: cri-containerd-10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae.scope: Deactivated successfully. 
May 27 03:12:08.109865 containerd[1550]: time="2025-05-27T03:12:08.109825414Z" level=info msg="StartContainer for \"10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae\" returns successfully" May 27 03:12:08.110918 containerd[1550]: time="2025-05-27T03:12:08.110874934Z" level=info msg="received exit event container_id:\"10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae\" id:\"10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae\" pid:4662 exited_at:{seconds:1748315528 nanos:110397498}" May 27 03:12:08.111064 containerd[1550]: time="2025-05-27T03:12:08.111039394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae\" id:\"10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae\" pid:4662 exited_at:{seconds:1748315528 nanos:110397498}" May 27 03:12:08.138503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10bcb69963a50e0c40ebcf4abb25ed4bf53d541f5ea8382e52b5e373fcfea5ae-rootfs.mount: Deactivated successfully. 
May 27 03:12:08.987315 kubelet[2683]: E0527 03:12:08.987277 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:08.989620 containerd[1550]: time="2025-05-27T03:12:08.989556876Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 03:12:08.998304 containerd[1550]: time="2025-05-27T03:12:08.998262952Z" level=info msg="Container 08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790: CDI devices from CRI Config.CDIDevices: []" May 27 03:12:09.007407 containerd[1550]: time="2025-05-27T03:12:09.007339578Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790\"" May 27 03:12:09.008000 containerd[1550]: time="2025-05-27T03:12:09.007963441Z" level=info msg="StartContainer for \"08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790\"" May 27 03:12:09.009139 containerd[1550]: time="2025-05-27T03:12:09.009088656Z" level=info msg="connecting to shim 08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790" address="unix:///run/containerd/s/2fd312128dd8361f2eba7500a6062d01662db79aa31df401fb3c1be606ee9a46" protocol=ttrpc version=3 May 27 03:12:09.035870 systemd[1]: Started cri-containerd-08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790.scope - libcontainer container 08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790. May 27 03:12:09.063057 systemd[1]: cri-containerd-08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790.scope: Deactivated successfully. 
May 27 03:12:09.064130 containerd[1550]: time="2025-05-27T03:12:09.064009403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790\" id:\"08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790\" pid:4701 exited_at:{seconds:1748315529 nanos:63621514}" May 27 03:12:09.144204 containerd[1550]: time="2025-05-27T03:12:09.144133665Z" level=info msg="received exit event container_id:\"08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790\" id:\"08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790\" pid:4701 exited_at:{seconds:1748315529 nanos:63621514}" May 27 03:12:09.152333 containerd[1550]: time="2025-05-27T03:12:09.152228925Z" level=info msg="StartContainer for \"08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790\" returns successfully" May 27 03:12:09.166117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08dc03dbcac938a69a65e6a1406795369326d984f285e6706bd96e8bf17e0790-rootfs.mount: Deactivated successfully. 
May 27 03:12:09.766067 kubelet[2683]: E0527 03:12:09.766010 2683 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 03:12:09.996739 kubelet[2683]: E0527 03:12:09.993982 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:10.001045 containerd[1550]: time="2025-05-27T03:12:10.000975287Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 03:12:10.018338 containerd[1550]: time="2025-05-27T03:12:10.018147464Z" level=info msg="Container 163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e: CDI devices from CRI Config.CDIDevices: []" May 27 03:12:10.030450 containerd[1550]: time="2025-05-27T03:12:10.030382769Z" level=info msg="CreateContainer within sandbox \"c53882424b4c9241618d6672ffd9388db391fbc45ce2c6e225eaaba31dcfa6cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\"" May 27 03:12:10.031219 containerd[1550]: time="2025-05-27T03:12:10.031019247Z" level=info msg="StartContainer for \"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\"" May 27 03:12:10.032217 containerd[1550]: time="2025-05-27T03:12:10.032186865Z" level=info msg="connecting to shim 163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e" address="unix:///run/containerd/s/2fd312128dd8361f2eba7500a6062d01662db79aa31df401fb3c1be606ee9a46" protocol=ttrpc version=3 May 27 03:12:10.055922 systemd[1]: Started cri-containerd-163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e.scope - libcontainer container 
163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e. May 27 03:12:10.109373 containerd[1550]: time="2025-05-27T03:12:10.109331662Z" level=info msg="StartContainer for \"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\" returns successfully" May 27 03:12:10.196088 containerd[1550]: time="2025-05-27T03:12:10.195786697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\" id:\"32bd5032d6c765d93fa42a51ae963f06bf2e0f7e22adc9d7bf5b2ec28e7d7235\" pid:4769 exited_at:{seconds:1748315530 nanos:195391122}" May 27 03:12:10.644782 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) May 27 03:12:10.693349 kubelet[2683]: E0527 03:12:10.693264 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:11.000320 kubelet[2683]: E0527 03:12:11.000259 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:11.161098 kubelet[2683]: I0527 03:12:11.161018 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mwrhq" podStartSLOduration=6.160997166 podStartE2EDuration="6.160997166s" podCreationTimestamp="2025-05-27 03:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:12:11.160854307 +0000 UTC m=+91.587162333" watchObservedRunningTime="2025-05-27 03:12:11.160997166 +0000 UTC m=+91.587305182" May 27 03:12:11.694151 kubelet[2683]: E0527 03:12:11.693524 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-k29k2" podUID="f151dbdd-88d0-4bd2-af84-8e423b06c5b7" May 27 03:12:12.104367 kubelet[2683]: I0527 03:12:12.104264 2683 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T03:12:12Z","lastTransitionTime":"2025-05-27T03:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 27 03:12:12.127361 kubelet[2683]: E0527 03:12:12.127312 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:12.415334 containerd[1550]: time="2025-05-27T03:12:12.415158570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\" id:\"f57c98ea87de2461949c15309265d16d2b67a51e2790cc2bf82b9b4b6bb0015e\" pid:4909 exit_status:1 exited_at:{seconds:1748315532 nanos:414788301}" May 27 03:12:13.696358 kubelet[2683]: E0527 03:12:13.696276 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-k29k2" podUID="f151dbdd-88d0-4bd2-af84-8e423b06c5b7" May 27 03:12:14.102274 systemd-networkd[1488]: lxc_health: Link UP May 27 03:12:14.104887 systemd-networkd[1488]: lxc_health: Gained carrier May 27 03:12:14.129233 kubelet[2683]: E0527 03:12:14.128065 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:14.561124 containerd[1550]: time="2025-05-27T03:12:14.561051740Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\" id:\"ce2291fdd639b9aa428f4a294eb0c3fa75ca614e344999aa80742b064ab38ae9\" pid:5294 exited_at:{seconds:1748315534 nanos:560496439}" May 27 03:12:15.009071 kubelet[2683]: E0527 03:12:15.009020 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:15.197237 systemd-networkd[1488]: lxc_health: Gained IPv6LL May 27 03:12:15.697952 kubelet[2683]: E0527 03:12:15.697506 2683 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:12:16.681134 containerd[1550]: time="2025-05-27T03:12:16.681031592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\" id:\"1e036c73c8697b03ce12c8aa1f7aae93e97de2b8d42cbac6424a88f3c15d2931\" pid:5331 exited_at:{seconds:1748315536 nanos:680104624}" May 27 03:12:18.776330 containerd[1550]: time="2025-05-27T03:12:18.776210078Z" level=info msg="TaskExit event in podsandbox handler container_id:\"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\" id:\"2412c8cd44d7ce749ed55b46692df95276db8384b0147c2b18f8fcc9917ac776\" pid:5364 exited_at:{seconds:1748315538 nanos:775756266}" May 27 03:12:20.883122 containerd[1550]: time="2025-05-27T03:12:20.883061734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"163ee4c664c823c68ada36a5b4e271cf83ebc8f8d6bf77313567f85489ae9b9e\" id:\"e57150b3a2c8dab9e6ef218474a68358248121cca162a25384fdc4ac400c4d35\" pid:5388 exited_at:{seconds:1748315540 nanos:882588713}" May 27 03:12:20.898105 sshd[4501]: Connection closed by 10.0.0.1 port 39124 May 27 03:12:20.898502 sshd-session[4497]: pam_unix(sshd:session): session closed 
for user core May 27 03:12:20.902902 systemd[1]: sshd@27-10.0.0.11:22-10.0.0.1:39124.service: Deactivated successfully. May 27 03:12:20.904923 systemd[1]: session-28.scope: Deactivated successfully. May 27 03:12:20.905672 systemd-logind[1537]: Session 28 logged out. Waiting for processes to exit. May 27 03:12:20.906838 systemd-logind[1537]: Removed session 28.