May 16 00:24:49.909413 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:08:20 -00 2025
May 16 00:24:49.909442 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 16 00:24:49.909457 kernel: BIOS-provided physical RAM map:
May 16 00:24:49.909466 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 16 00:24:49.909475 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 16 00:24:49.909483 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 16 00:24:49.909494 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 16 00:24:49.909505 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 16 00:24:49.909516 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 16 00:24:49.909527 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 16 00:24:49.909543 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:24:49.909554 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 16 00:24:49.909588 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:24:49.909602 kernel: NX (Execute Disable) protection: active
May 16 00:24:49.909626 kernel: APIC: Static calls initialized
May 16 00:24:49.909643 kernel: SMBIOS 2.8 present.
May 16 00:24:49.909655 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 16 00:24:49.909681 kernel: Hypervisor detected: KVM
May 16 00:24:49.909708 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 00:24:49.909721 kernel: kvm-clock: using sched offset of 2343481461 cycles
May 16 00:24:49.909733 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 00:24:49.909743 kernel: tsc: Detected 2794.748 MHz processor
May 16 00:24:49.909754 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 00:24:49.909764 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 00:24:49.909774 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 16 00:24:49.909788 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 16 00:24:49.909798 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 00:24:49.909808 kernel: Using GB pages for direct mapping
May 16 00:24:49.909818 kernel: ACPI: Early table checksum verification disabled
May 16 00:24:49.909828 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 16 00:24:49.909838 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:24:49.909848 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:24:49.909858 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:24:49.909868 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 16 00:24:49.909881 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:24:49.909891 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:24:49.909901 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:24:49.909911 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:24:49.909921 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 16 00:24:49.909931 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 16 00:24:49.909946 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 16 00:24:49.909959 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 16 00:24:49.909969 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 16 00:24:49.909979 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 16 00:24:49.909990 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 16 00:24:49.910000 kernel: No NUMA configuration found
May 16 00:24:49.910010 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 16 00:24:49.910020 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 16 00:24:49.910034 kernel: Zone ranges:
May 16 00:24:49.910045 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 00:24:49.910055 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 16 00:24:49.910065 kernel: Normal empty
May 16 00:24:49.910075 kernel: Movable zone start for each node
May 16 00:24:49.910085 kernel: Early memory node ranges
May 16 00:24:49.910096 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 16 00:24:49.910106 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 16 00:24:49.910116 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 16 00:24:49.910127 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:24:49.910140 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 16 00:24:49.910150 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 16 00:24:49.910161 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 00:24:49.910171 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 00:24:49.910181 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 00:24:49.910192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 00:24:49.910202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 00:24:49.910212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 00:24:49.910222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 00:24:49.910236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 00:24:49.910246 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 00:24:49.910256 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 00:24:49.910266 kernel: TSC deadline timer available
May 16 00:24:49.910277 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 16 00:24:49.910287 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 00:24:49.910298 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 00:24:49.910308 kernel: kvm-guest: setup PV sched yield
May 16 00:24:49.910318 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 16 00:24:49.910331 kernel: Booting paravirtualized kernel on KVM
May 16 00:24:49.910342 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 00:24:49.910352 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 00:24:49.910363 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 16 00:24:49.910373 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 16 00:24:49.910383 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 00:24:49.910393 kernel: kvm-guest: PV spinlocks enabled
May 16 00:24:49.910403 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 00:24:49.910415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 16 00:24:49.910429 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:24:49.910439 kernel: random: crng init done
May 16 00:24:49.910450 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:24:49.910460 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:24:49.910470 kernel: Fallback order for Node 0: 0
May 16 00:24:49.910480 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 16 00:24:49.910491 kernel: Policy zone: DMA32
May 16 00:24:49.910502 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:24:49.910517 kernel: Memory: 2430496K/2571752K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43600K init, 1472K bss, 140996K reserved, 0K cma-reserved)
May 16 00:24:49.910527 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:24:49.910537 kernel: ftrace: allocating 37997 entries in 149 pages
May 16 00:24:49.910548 kernel: ftrace: allocated 149 pages with 4 groups
May 16 00:24:49.910558 kernel: Dynamic Preempt: voluntary
May 16 00:24:49.910579 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:24:49.910590 kernel: rcu: RCU event tracing is enabled.
May 16 00:24:49.910600 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:24:49.910611 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:24:49.910625 kernel: Rude variant of Tasks RCU enabled.
May 16 00:24:49.910635 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:24:49.910645 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:24:49.910655 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:24:49.910684 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 00:24:49.910696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 00:24:49.910706 kernel: Console: colour VGA+ 80x25
May 16 00:24:49.910716 kernel: printk: console [ttyS0] enabled
May 16 00:24:49.910726 kernel: ACPI: Core revision 20230628
May 16 00:24:49.910737 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 00:24:49.910751 kernel: APIC: Switch to symmetric I/O mode setup
May 16 00:24:49.910761 kernel: x2apic enabled
May 16 00:24:49.910771 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 00:24:49.910782 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 00:24:49.910792 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 00:24:49.910803 kernel: kvm-guest: setup PV IPIs
May 16 00:24:49.910825 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 00:24:49.910836 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 16 00:24:49.910863 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 00:24:49.910895 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 00:24:49.910907 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 00:24:49.910921 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 00:24:49.910932 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 00:24:49.910943 kernel: Spectre V2 : Mitigation: Retpolines
May 16 00:24:49.910954 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 16 00:24:49.910965 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 00:24:49.910982 kernel: RETBleed: Mitigation: untrained return thunk
May 16 00:24:49.910993 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 00:24:49.911004 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 00:24:49.911015 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 00:24:49.911026 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 00:24:49.911037 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 00:24:49.911048 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 00:24:49.911058 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 00:24:49.911072 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 00:24:49.911083 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 00:24:49.911094 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 00:24:49.911104 kernel: Freeing SMP alternatives memory: 32K
May 16 00:24:49.911115 kernel: pid_max: default: 32768 minimum: 301
May 16 00:24:49.911126 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 00:24:49.911136 kernel: landlock: Up and running.
May 16 00:24:49.911147 kernel: SELinux: Initializing.
May 16 00:24:49.911158 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:24:49.911172 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:24:49.911182 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 00:24:49.911193 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:24:49.911204 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:24:49.911215 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:24:49.911226 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 00:24:49.911236 kernel: ... version: 0
May 16 00:24:49.911247 kernel: ... bit width: 48
May 16 00:24:49.911257 kernel: ... generic registers: 6
May 16 00:24:49.911271 kernel: ... value mask: 0000ffffffffffff
May 16 00:24:49.911282 kernel: ... max period: 00007fffffffffff
May 16 00:24:49.911292 kernel: ... fixed-purpose events: 0
May 16 00:24:49.911303 kernel: ... event mask: 000000000000003f
May 16 00:24:49.911313 kernel: signal: max sigframe size: 1776
May 16 00:24:49.911324 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:24:49.911335 kernel: rcu: Max phase no-delay instances is 400.
May 16 00:24:49.911345 kernel: smp: Bringing up secondary CPUs ...
May 16 00:24:49.911356 kernel: smpboot: x86: Booting SMP configuration:
May 16 00:24:49.911370 kernel: .... node #0, CPUs: #1 #2 #3
May 16 00:24:49.911380 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:24:49.911390 kernel: smpboot: Max logical packages: 1
May 16 00:24:49.911401 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 00:24:49.911412 kernel: devtmpfs: initialized
May 16 00:24:49.911422 kernel: x86/mm: Memory block size: 128MB
May 16 00:24:49.911433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:24:49.911444 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:24:49.911454 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:24:49.911468 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:24:49.911478 kernel: audit: initializing netlink subsys (disabled)
May 16 00:24:49.911489 kernel: audit: type=2000 audit(1747355088.469:1): state=initialized audit_enabled=0 res=1
May 16 00:24:49.911500 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:24:49.911511 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 00:24:49.911523 kernel: cpuidle: using governor menu
May 16 00:24:49.911536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:24:49.911549 kernel: dca service started, version 1.12.1
May 16 00:24:49.911563 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 16 00:24:49.911595 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 16 00:24:49.911608 kernel: PCI: Using configuration type 1 for base access
May 16 00:24:49.911622 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 00:24:49.911635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:24:49.911649 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 00:24:49.911662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:24:49.911703 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 00:24:49.911717 kernel: ACPI: Added _OSI(Module Device)
May 16 00:24:49.911731 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:24:49.911746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:24:49.911756 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:24:49.911767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:24:49.911786 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 00:24:49.911812 kernel: ACPI: Interpreter enabled
May 16 00:24:49.911825 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 00:24:49.911836 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 00:24:49.911847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 00:24:49.911858 kernel: PCI: Using E820 reservations for host bridge windows
May 16 00:24:49.911872 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 00:24:49.911887 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:24:49.912102 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:24:49.912268 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 00:24:49.912426 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 00:24:49.912442 kernel: PCI host bridge to bus 0000:00
May 16 00:24:49.912615 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 00:24:49.912802 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 00:24:49.912948 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 00:24:49.913090 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 16 00:24:49.913232 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 16 00:24:49.913375 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 16 00:24:49.913540 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:24:49.913755 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 16 00:24:49.913930 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 16 00:24:49.914090 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 16 00:24:49.914245 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 16 00:24:49.914401 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 16 00:24:49.914583 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 00:24:49.914805 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:24:49.914973 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 16 00:24:49.915132 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 16 00:24:49.915290 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 16 00:24:49.915459 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 16 00:24:49.915631 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 16 00:24:49.915806 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 16 00:24:49.915967 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 16 00:24:49.916136 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 16 00:24:49.916301 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 16 00:24:49.916458 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 16 00:24:49.916632 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 16 00:24:49.916818 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 16 00:24:49.916985 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 16 00:24:49.917142 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 00:24:49.917315 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 16 00:24:49.917474 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 16 00:24:49.917640 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 16 00:24:49.917830 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 16 00:24:49.918038 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 16 00:24:49.918055 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 00:24:49.918066 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 00:24:49.918081 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 00:24:49.918092 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 00:24:49.918103 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 00:24:49.918113 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 00:24:49.918124 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 00:24:49.918135 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 00:24:49.918145 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 00:24:49.918156 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 00:24:49.918167 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 00:24:49.918181 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 00:24:49.918191 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 00:24:49.918202 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 00:24:49.918212 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 00:24:49.918223 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 00:24:49.918234 kernel: iommu: Default domain type: Translated
May 16 00:24:49.918245 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 00:24:49.918256 kernel: PCI: Using ACPI for IRQ routing
May 16 00:24:49.918266 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 00:24:49.918280 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 16 00:24:49.918291 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 16 00:24:49.918449 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 00:24:49.918615 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 00:24:49.918789 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 00:24:49.918804 kernel: vgaarb: loaded
May 16 00:24:49.918816 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 00:24:49.918826 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 00:24:49.918842 kernel: clocksource: Switched to clocksource kvm-clock
May 16 00:24:49.918853 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:24:49.918864 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:24:49.918875 kernel: pnp: PnP ACPI init
May 16 00:24:49.919046 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 16 00:24:49.919062 kernel: pnp: PnP ACPI: found 6 devices
May 16 00:24:49.919073 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 00:24:49.919084 kernel: NET: Registered PF_INET protocol family
May 16 00:24:49.919099 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:24:49.919109 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:24:49.919120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:24:49.919131 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:24:49.919142 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 00:24:49.919153 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:24:49.919164 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:24:49.919175 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:24:49.919185 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:24:49.919199 kernel: NET: Registered PF_XDP protocol family
May 16 00:24:49.919343 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 00:24:49.919489 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 00:24:49.919682 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 00:24:49.919833 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 16 00:24:49.919976 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 16 00:24:49.920119 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 16 00:24:49.920134 kernel: PCI: CLS 0 bytes, default 64
May 16 00:24:49.920145 kernel: Initialise system trusted keyrings
May 16 00:24:49.920161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:24:49.920172 kernel: Key type asymmetric registered
May 16 00:24:49.920182 kernel: Asymmetric key parser 'x509' registered
May 16 00:24:49.920193 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 16 00:24:49.920204 kernel: io scheduler mq-deadline registered
May 16 00:24:49.920215 kernel: io scheduler kyber registered
May 16 00:24:49.920225 kernel: io scheduler bfq registered
May 16 00:24:49.920236 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 00:24:49.920247 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 00:24:49.920261 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 00:24:49.920272 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 00:24:49.920283 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:24:49.920294 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 00:24:49.920304 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 00:24:49.920315 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 00:24:49.920326 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 00:24:49.920488 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 00:24:49.920654 kernel: rtc_cmos 00:04: registered as rtc0
May 16 00:24:49.920721 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 00:24:49.920873 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T00:24:49 UTC (1747355089)
May 16 00:24:49.921020 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 16 00:24:49.921035 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 00:24:49.921045 kernel: NET: Registered PF_INET6 protocol family
May 16 00:24:49.921056 kernel: Segment Routing with IPv6
May 16 00:24:49.921067 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:24:49.921077 kernel: NET: Registered PF_PACKET protocol family
May 16 00:24:49.921093 kernel: Key type dns_resolver registered
May 16 00:24:49.921103 kernel: IPI shorthand broadcast: enabled
May 16 00:24:49.921114 kernel: sched_clock: Marking stable (566002832, 106643206)->(726193301, -53547263)
May 16 00:24:49.921125 kernel: registered taskstats version 1
May 16 00:24:49.921135 kernel: Loading compiled-in X.509 certificates
May 16 00:24:49.921146 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 36d9e3bf63b9b28466bcfa7a508d814673a33a26'
May 16 00:24:49.921157 kernel: Key type .fscrypt registered
May 16 00:24:49.921167 kernel: Key type fscrypt-provisioning registered
May 16 00:24:49.921178 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:24:49.921192 kernel: ima: Allocated hash algorithm: sha1
May 16 00:24:49.921203 kernel: ima: No architecture policies found
May 16 00:24:49.921213 kernel: clk: Disabling unused clocks
May 16 00:24:49.921224 kernel: Freeing unused kernel image (initmem) memory: 43600K
May 16 00:24:49.921235 kernel: Write protecting the kernel read-only data: 40960k
May 16 00:24:49.921246 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 16 00:24:49.921256 kernel: Run /init as init process
May 16 00:24:49.921267 kernel: with arguments:
May 16 00:24:49.921280 kernel: /init
May 16 00:24:49.921290 kernel: with environment:
May 16 00:24:49.921301 kernel: HOME=/
May 16 00:24:49.921311 kernel: TERM=linux
May 16 00:24:49.921321 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:24:49.921333 systemd[1]: Successfully made /usr/ read-only.
May 16 00:24:49.921348 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 00:24:49.921360 systemd[1]: Detected virtualization kvm.
May 16 00:24:49.921374 systemd[1]: Detected architecture x86-64.
May 16 00:24:49.921385 systemd[1]: Running in initrd.
May 16 00:24:49.921396 systemd[1]: No hostname configured, using default hostname.
May 16 00:24:49.921408 systemd[1]: Hostname set to .
May 16 00:24:49.921420 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:24:49.921431 systemd[1]: Queued start job for default target initrd.target.
May 16 00:24:49.921443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:24:49.921454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:24:49.921470 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 00:24:49.921496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 00:24:49.921511 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 00:24:49.921524 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 00:24:49.921537 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 00:24:49.921552 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 00:24:49.921574 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:24:49.921586 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 00:24:49.921598 systemd[1]: Reached target paths.target - Path Units.
May 16 00:24:49.921610 systemd[1]: Reached target slices.target - Slice Units.
May 16 00:24:49.921621 systemd[1]: Reached target swap.target - Swaps.
May 16 00:24:49.921633 systemd[1]: Reached target timers.target - Timer Units.
May 16 00:24:49.921645 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 00:24:49.921660 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 00:24:49.921687 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 00:24:49.921699 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 00:24:49.921711 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:24:49.921723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 00:24:49.921735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:24:49.921746 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:24:49.921758 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:24:49.921770 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:24:49.921785 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:24:49.921797 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:24:49.921809 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:24:49.921821 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:24:49.921832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:24:49.921844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:24:49.921856 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:24:49.921872 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:24:49.921884 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:24:49.921921 systemd-journald[193]: Collecting audit messages is disabled. May 16 00:24:49.921954 systemd-journald[193]: Journal started May 16 00:24:49.921979 systemd-journald[193]: Runtime Journal (/run/log/journal/1cde6f1cf8574bcd909058505b59a8f7) is 6M, max 48.3M, 42.3M free. May 16 00:24:49.909660 systemd-modules-load[195]: Inserted module 'overlay' May 16 00:24:49.944948 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:24:49.944970 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:24:49.944984 kernel: Bridge firewalling registered May 16 00:24:49.936356 systemd-modules-load[195]: Inserted module 'br_netfilter' May 16 00:24:49.945251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 16 00:24:49.945901 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:24:49.949378 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:24:49.950436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:24:49.954506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:24:49.979278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:24:49.982992 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:24:49.984273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:24:49.988339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:24:49.990135 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:24:49.995059 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:24:49.998091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:24:49.999380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:24:50.010520 dracut-cmdline[227]: dracut-dracut-053 May 16 00:24:50.015598 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733 May 16 00:24:50.054789 systemd-resolved[229]: Positive Trust Anchors: May 16 00:24:50.054803 systemd-resolved[229]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:24:50.054842 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:24:50.057905 systemd-resolved[229]: Defaulting to hostname 'linux'. May 16 00:24:50.059014 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:24:50.064313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:24:50.104696 kernel: SCSI subsystem initialized May 16 00:24:50.114702 kernel: Loading iSCSI transport class v2.0-870. May 16 00:24:50.124695 kernel: iscsi: registered transport (tcp) May 16 00:24:50.145941 kernel: iscsi: registered transport (qla4xxx) May 16 00:24:50.146016 kernel: QLogic iSCSI HBA Driver May 16 00:24:50.191414 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:24:50.194016 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:24:50.236203 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 16 00:24:50.236278 kernel: device-mapper: uevent: version 1.0.3 May 16 00:24:50.237237 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:24:50.277705 kernel: raid6: avx2x4 gen() 30062 MB/s May 16 00:24:50.294715 kernel: raid6: avx2x2 gen() 30875 MB/s May 16 00:24:50.311821 kernel: raid6: avx2x1 gen() 25564 MB/s May 16 00:24:50.311885 kernel: raid6: using algorithm avx2x2 gen() 30875 MB/s May 16 00:24:50.329855 kernel: raid6: .... xor() 19777 MB/s, rmw enabled May 16 00:24:50.329895 kernel: raid6: using avx2x2 recovery algorithm May 16 00:24:50.350698 kernel: xor: automatically using best checksumming function avx May 16 00:24:50.495702 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:24:50.506705 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:24:50.509297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:24:50.538127 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 16 00:24:50.543730 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:24:50.548379 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:24:50.570804 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation May 16 00:24:50.602420 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:24:50.605179 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:24:50.684393 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:24:50.687204 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:24:50.705934 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:24:50.709344 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 16 00:24:50.712395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:24:50.713886 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:24:50.718633 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:24:50.724706 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 00:24:50.728026 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:24:50.731688 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:24:50.738305 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:24:50.738337 kernel: GPT:9289727 != 19775487 May 16 00:24:50.738349 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:24:50.738359 kernel: GPT:9289727 != 19775487 May 16 00:24:50.738870 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:24:50.740182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:24:50.751018 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:24:50.759904 kernel: AVX2 version of gcm_enc/dec engaged. May 16 00:24:50.759957 kernel: AES CTR mode by8 optimization enabled May 16 00:24:50.761694 kernel: libata version 3.00 loaded. May 16 00:24:50.769260 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:24:50.769412 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:24:50.770442 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:24:50.771197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:24:50.771360 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 00:24:50.782056 kernel: ahci 0000:00:1f.2: version 3.0 May 16 00:24:50.782290 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 00:24:50.776968 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:24:50.789254 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 16 00:24:50.789477 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 00:24:50.781422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:24:50.791685 kernel: scsi host0: ahci May 16 00:24:50.791855 kernel: scsi host1: ahci May 16 00:24:50.792000 kernel: scsi host2: ahci May 16 00:24:50.792688 kernel: scsi host3: ahci May 16 00:24:50.795698 kernel: scsi host4: ahci May 16 00:24:50.800360 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) May 16 00:24:50.803740 kernel: BTRFS: device fsid a728581e-9e7f-4655-895a-4f66e17e3645 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (462) May 16 00:24:50.808689 kernel: scsi host5: ahci May 16 00:24:50.814178 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 16 00:24:50.814202 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 16 00:24:50.814214 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 16 00:24:50.814225 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 16 00:24:50.814235 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 16 00:24:50.814250 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 16 00:24:50.826198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 00:24:50.856999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:24:50.867027 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
May 16 00:24:50.876465 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 00:24:50.884164 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 00:24:50.884593 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 00:24:50.888714 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:24:50.892215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:24:50.908888 disk-uuid[560]: Primary Header is updated. May 16 00:24:50.908888 disk-uuid[560]: Secondary Entries is updated. May 16 00:24:50.908888 disk-uuid[560]: Secondary Header is updated. May 16 00:24:50.913157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:24:50.916079 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:24:51.122576 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 00:24:51.122653 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 00:24:51.122694 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 00:24:51.123691 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 00:24:51.123715 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 00:24:51.124703 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 00:24:51.125699 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 00:24:51.125722 kernel: ata3.00: applying bridge limits May 16 00:24:51.126693 kernel: ata3.00: configured for UDMA/100 May 16 00:24:51.127704 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 00:24:51.171243 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 00:24:51.171493 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 00:24:51.183697 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 00:24:51.921765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:24:51.921860 disk-uuid[565]: The operation has completed successfully. May 16 00:24:51.957081 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:24:51.957201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:24:51.992114 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:24:52.011430 sh[592]: Success May 16 00:24:52.024714 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 16 00:24:52.060329 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:24:52.062796 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:24:52.083458 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
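The earlier `GPT:9289727 != 19775487` complaints and the `disk-uuid` messages just above are two halves of the same story: the disk image was grown after its partition table was written, so the backup (alternate) GPT header is no longer in the last sector, and `disk-uuid.service` rewrites it. A minimal sketch of the arithmetic, using the numbers from this log (the helper name is ours, not a kernel API):

```python
# Sketch of the check behind "GPT:9289727 != 19775487" above.
# Numbers are taken from this log; expected_alt_header_lba() is illustrative.
def expected_alt_header_lba(total_sectors: int) -> int:
    # GPT places the backup (alternate) header in the very last LBA.
    return total_sectors - 1

total_sectors = 19_775_488   # virtio_blk: "[vda] 19775488 512-byte logical blocks"
recorded_alt = 9_289_727     # LBA the on-disk table still points at

assert expected_alt_header_lba(total_sectors) == 19_775_487
assert recorded_alt != expected_alt_header_lba(total_sectors)
```

The stale value corresponds to an original image of (9289727 + 1) × 512 bytes, about 4.4 GiB, versus the 10.1 GB device the kernel now sees; the warnings stop once the backup header is rewritten at the new end of the disk, which is why the later `vda: vda1 vda2 ...` rescan succeeds without complaint.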
May 16 00:24:52.089398 kernel: BTRFS info (device dm-0): first mount of filesystem a728581e-9e7f-4655-895a-4f66e17e3645 May 16 00:24:52.089447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 00:24:52.089477 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:24:52.091378 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:24:52.091402 kernel: BTRFS info (device dm-0): using free space tree May 16 00:24:52.097294 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:24:52.098951 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:24:52.099810 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:24:52.102689 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 00:24:52.136584 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:24:52.136638 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:24:52.136650 kernel: BTRFS info (device vda6): using free space tree May 16 00:24:52.140701 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:24:52.145710 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:24:52.152442 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:24:52.155147 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 00:24:52.227204 ignition[685]: Ignition 2.20.0 May 16 00:24:52.228251 ignition[685]: Stage: fetch-offline May 16 00:24:52.228295 ignition[685]: no configs at "/usr/lib/ignition/base.d" May 16 00:24:52.228308 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:24:52.228452 ignition[685]: parsed url from cmdline: "" May 16 00:24:52.228457 ignition[685]: no config URL provided May 16 00:24:52.228466 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:24:52.228479 ignition[685]: no config at "/usr/lib/ignition/user.ign" May 16 00:24:52.228529 ignition[685]: op(1): [started] loading QEMU firmware config module May 16 00:24:52.228537 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:24:52.236604 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:24:52.239196 ignition[685]: op(1): [finished] loading QEMU firmware config module May 16 00:24:52.242559 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:24:52.283706 ignition[685]: parsing config with SHA512: 93817fb55d2516b3dc48e22d892d1b207f715508c12a9a851b90d0a7e72180e0d0f6d1c5296882070859522bc81bfda2a62604d34da5cc11f93bcbf8acd6ab24 May 16 00:24:52.288043 systemd-networkd[780]: lo: Link UP May 16 00:24:52.288054 systemd-networkd[780]: lo: Gained carrier May 16 00:24:52.290137 systemd-networkd[780]: Enumeration completed May 16 00:24:52.290239 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:24:52.291070 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:24:52.291076 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
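Note the `parsing config with SHA512: 93817f...` line: before applying a config, Ignition logs the SHA-512 digest of the rendered JSON it fetched (here, from the QEMU firmware config device). Ignition itself is written in Go; the following is only an illustration of the digest step in Python, over an arbitrary stand-in config — the digest in the log covers the real QEMU-provided config, which is not reproduced here:

```python
import hashlib

# Illustrative only: a stand-in config blob, not the one this log hashed.
config = b'{"ignition": {"version": "3.4.0"}}'
digest = hashlib.sha512(config).hexdigest()

# SHA-512 yields 64 bytes, i.e. 128 hex characters, matching the
# length of the digest printed in the log line above.
assert len(digest) == 128
```

Comparing a logged digest against a locally computed one is a quick way to confirm which config a given boot actually consumed.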
May 16 00:24:52.296249 ignition[685]: fetch-offline: fetch-offline passed May 16 00:24:52.291917 systemd-networkd[780]: eth0: Link UP May 16 00:24:52.296317 ignition[685]: Ignition finished successfully May 16 00:24:52.291921 systemd-networkd[780]: eth0: Gained carrier May 16 00:24:52.291924 systemd[1]: Reached target network.target - Network. May 16 00:24:52.291929 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:24:52.295780 unknown[685]: fetched base config from "system" May 16 00:24:52.295791 unknown[685]: fetched user config from "qemu" May 16 00:24:52.298266 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:24:52.301255 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:24:52.302346 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 16 00:24:52.305743 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:24:52.339907 ignition[784]: Ignition 2.20.0 May 16 00:24:52.339918 ignition[784]: Stage: kargs May 16 00:24:52.340085 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 16 00:24:52.340096 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:24:52.341007 ignition[784]: kargs: kargs passed May 16 00:24:52.341054 ignition[784]: Ignition finished successfully May 16 00:24:52.344896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:24:52.346981 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 16 00:24:52.373154 ignition[793]: Ignition 2.20.0 May 16 00:24:52.373710 ignition[793]: Stage: disks May 16 00:24:52.373879 ignition[793]: no configs at "/usr/lib/ignition/base.d" May 16 00:24:52.373890 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:24:52.374818 ignition[793]: disks: disks passed May 16 00:24:52.377167 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:24:52.374881 ignition[793]: Ignition finished successfully May 16 00:24:52.378630 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:24:52.380504 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:24:52.380916 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:24:52.381242 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:24:52.381409 systemd[1]: Reached target basic.target - Basic System. May 16 00:24:52.382785 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:24:52.407902 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.13 May 16 00:24:52.407916 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. May 16 00:24:52.411014 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 16 00:24:52.418059 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:24:52.419517 systemd[1]: Mounting sysroot.mount - /sysroot... May 16 00:24:52.569702 kernel: EXT4-fs (vda9): mounted filesystem f27adc75-a467-4bfb-9c02-79a2879452a3 r/w with ordered data mode. Quota mode: none. May 16 00:24:52.570334 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:24:52.572013 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:24:52.574832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 16 00:24:52.576390 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:24:52.578227 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 00:24:52.578279 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:24:52.578342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:24:52.589166 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:24:52.591653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 00:24:52.597882 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) May 16 00:24:52.597912 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:24:52.597928 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:24:52.597944 kernel: BTRFS info (device vda6): using free space tree May 16 00:24:52.600691 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:24:52.601576 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:24:52.630902 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:24:52.636388 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory May 16 00:24:52.640301 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:24:52.644134 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:24:52.736632 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:24:52.739166 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:24:52.741378 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 16 00:24:52.763705 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:24:52.782021 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 00:24:52.797245 ignition[926]: INFO : Ignition 2.20.0 May 16 00:24:52.797245 ignition[926]: INFO : Stage: mount May 16 00:24:52.799556 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:24:52.799556 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:24:52.799556 ignition[926]: INFO : mount: mount passed May 16 00:24:52.799556 ignition[926]: INFO : Ignition finished successfully May 16 00:24:52.801492 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:24:52.805047 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:24:53.088868 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:24:53.090579 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:24:53.114653 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938) May 16 00:24:53.114703 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22 May 16 00:24:53.116292 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:24:53.116307 kernel: BTRFS info (device vda6): using free space tree May 16 00:24:53.119699 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:24:53.120963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 00:24:53.169386 ignition[955]: INFO : Ignition 2.20.0 May 16 00:24:53.169386 ignition[955]: INFO : Stage: files May 16 00:24:53.171696 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:24:53.171696 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:24:53.171696 ignition[955]: DEBUG : files: compiled without relabeling support, skipping May 16 00:24:53.171696 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:24:53.171696 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:24:53.179019 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:24:53.179019 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:24:53.179019 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:24:53.179019 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 00:24:53.179019 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 16 00:24:53.174295 unknown[955]: wrote ssh authorized keys file for user: core May 16 00:24:53.282439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:24:53.438243 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 00:24:53.438243 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:24:53.443200 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 16 00:24:53.796926 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 00:24:53.883186 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:24:53.883186 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:24:53.887990 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 16 00:24:53.946906 systemd-networkd[780]: eth0: Gained IPv6LL May 16 00:24:54.386373 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 00:24:54.788860 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 00:24:54.791358 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 00:24:54.793140 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:24:54.795499 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:24:54.795499 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 00:24:54.795499 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 00:24:54.800120 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:24:54.800120 ignition[955]: INFO : files: op(e): op(f): [finished] 
writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:24:54.800120 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 00:24:54.800120 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:24:54.817830 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:24:54.823170 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:24:54.824871 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:24:54.824871 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 16 00:24:54.824871 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:24:54.824871 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:24:54.824871 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:24:54.824871 ignition[955]: INFO : files: files passed May 16 00:24:54.824871 ignition[955]: INFO : Ignition finished successfully May 16 00:24:54.836351 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:24:54.838490 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:24:54.840514 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:24:54.854378 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:24:54.854536 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
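The `setting preset to enabled/disabled` operations above correspond to systemd preset directives: Ignition records which units should start on the real root. As a hedged illustration, the directives implied by this log would look like the fragment below (the file name and path are assumptions; the actual file Ignition writes may differ):

```
# Hypothetical preset fragment, e.g. /sysroot/etc/systemd/system-preset/20-ignition.preset,
# reconstructing the op(10)/op(12) preset operations logged above.
disable coreos-metadata.service
enable prepare-helm.service
```

`systemctl preset-all` (or first-boot preset application) then creates or removes the corresponding enablement symlinks, which is exactly what op(11) is doing when it logs "removing enablement symlink(s)".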
May 16 00:24:54.858151 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 00:24:54.861589 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:24:54.861589 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:24:54.864774 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:24:54.868287 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 00:24:54.869843 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 00:24:54.872599 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 00:24:54.937301 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 00:24:54.937437 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 00:24:54.938384 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 00:24:54.940915 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 00:24:54.941279 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 00:24:54.946125 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 00:24:54.972833 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 00:24:54.974619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 00:24:55.026665 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 00:24:55.027201 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:24:55.027578 systemd[1]: Stopped target timers.target - Timer Units.
May 16 00:24:55.028097 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 00:24:55.028210 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 00:24:55.034759 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 00:24:55.035375 systemd[1]: Stopped target basic.target - Basic System.
May 16 00:24:55.035744 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 00:24:55.040450 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 00:24:55.041173 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 00:24:55.041534 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 00:24:55.042051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 00:24:55.047602 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 00:24:55.048097 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 00:24:55.048448 systemd[1]: Stopped target swap.target - Swaps.
May 16 00:24:55.048940 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 00:24:55.049056 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 00:24:55.056831 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 00:24:55.057372 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:24:55.057684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 00:24:55.061830 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:24:55.063916 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 00:24:55.064078 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 00:24:55.064937 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 00:24:55.065104 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 00:24:55.068547 systemd[1]: Stopped target paths.target - Path Units.
May 16 00:24:55.070519 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 00:24:55.075722 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:24:55.077021 systemd[1]: Stopped target slices.target - Slice Units.
May 16 00:24:55.079317 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 00:24:55.081109 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 00:24:55.081199 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 00:24:55.082950 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 00:24:55.083029 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 00:24:55.084806 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 00:24:55.084917 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 00:24:55.086856 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 00:24:55.086966 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 00:24:55.089747 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 00:24:55.090911 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 00:24:55.091039 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:24:55.094217 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 00:24:55.095794 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 00:24:55.095921 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:24:55.098334 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 00:24:55.098459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 00:24:55.107647 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 00:24:55.107815 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 00:24:55.117616 ignition[1010]: INFO : Ignition 2.20.0
May 16 00:24:55.117616 ignition[1010]: INFO : Stage: umount
May 16 00:24:55.119445 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:24:55.119445 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:24:55.119445 ignition[1010]: INFO : umount: umount passed
May 16 00:24:55.119445 ignition[1010]: INFO : Ignition finished successfully
May 16 00:24:55.122235 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 00:24:55.122434 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 00:24:55.125770 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 00:24:55.126369 systemd[1]: Stopped target network.target - Network.
May 16 00:24:55.128275 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 00:24:55.128345 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 00:24:55.130256 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 00:24:55.130321 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 00:24:55.132323 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 00:24:55.132384 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 00:24:55.134329 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 00:24:55.134392 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 00:24:55.136601 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 00:24:55.138509 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 00:24:55.144923 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 00:24:55.145117 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 00:24:55.151600 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 00:24:55.151935 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 00:24:55.152109 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 00:24:55.156272 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 00:24:55.157447 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 00:24:55.157510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:24:55.160010 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 00:24:55.161770 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 00:24:55.161827 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 00:24:55.164603 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 00:24:55.164654 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 00:24:55.168782 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 00:24:55.168859 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 00:24:55.171332 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 00:24:55.171385 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:24:55.174616 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:24:55.178164 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 00:24:55.178235 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 00:24:55.192870 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 00:24:55.193047 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:24:55.195855 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 00:24:55.195905 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 00:24:55.198292 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 00:24:55.198330 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:24:55.200686 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 00:24:55.200740 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 00:24:55.203245 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 00:24:55.203294 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 00:24:55.205713 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 00:24:55.205775 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:24:55.209307 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 00:24:55.210694 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 00:24:55.210747 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:24:55.213411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:24:55.213467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:24:55.216792 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 16 00:24:55.216858 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 00:24:55.220884 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 00:24:55.220992 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 00:24:55.229217 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 00:24:55.229338 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 00:24:56.136234 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 00:24:56.136384 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 00:24:56.137443 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 00:24:56.138938 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 00:24:56.138995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 00:24:56.140196 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 00:24:56.162849 systemd[1]: Switching root.
May 16 00:24:56.206544 systemd-journald[193]: Journal stopped
May 16 00:24:59.216948 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 16 00:24:59.217044 kernel: SELinux: policy capability network_peer_controls=1
May 16 00:24:59.217079 kernel: SELinux: policy capability open_perms=1
May 16 00:24:59.217097 kernel: SELinux: policy capability extended_socket_class=1
May 16 00:24:59.217114 kernel: SELinux: policy capability always_check_network=0
May 16 00:24:59.217130 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 00:24:59.217146 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 00:24:59.217175 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 00:24:59.217193 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 00:24:59.217212 kernel: audit: type=1403 audit(1747355097.360:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 00:24:59.217238 systemd[1]: Successfully loaded SELinux policy in 61.825ms.
May 16 00:24:59.217269 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.393ms.
May 16 00:24:59.217304 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 00:24:59.217322 systemd[1]: Detected virtualization kvm.
May 16 00:24:59.217340 systemd[1]: Detected architecture x86-64.
May 16 00:24:59.218727 systemd[1]: Detected first boot.
May 16 00:24:59.218756 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:24:59.218776 zram_generator::config[1057]: No configuration found.
May 16 00:24:59.218796 kernel: Guest personality initialized and is inactive
May 16 00:24:59.218814 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 16 00:24:59.218830 kernel: Initialized host personality
May 16 00:24:59.218846 kernel: NET: Registered PF_VSOCK protocol family
May 16 00:24:59.218869 systemd[1]: Populated /etc with preset unit settings.
May 16 00:24:59.218888 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 00:24:59.218907 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 00:24:59.218930 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 00:24:59.218948 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 00:24:59.218967 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 00:24:59.218985 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 00:24:59.219004 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 00:24:59.219022 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 00:24:59.219042 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 00:24:59.219062 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 00:24:59.219084 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 00:24:59.219102 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 00:24:59.219127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:24:59.219146 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:24:59.219164 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 00:24:59.219181 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 00:24:59.219199 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 00:24:59.219217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 00:24:59.219238 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 00:24:59.219257 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:24:59.219286 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 00:24:59.219305 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 00:24:59.219324 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 00:24:59.219342 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 00:24:59.219361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:24:59.219379 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 00:24:59.219397 systemd[1]: Reached target slices.target - Slice Units.
May 16 00:24:59.219419 systemd[1]: Reached target swap.target - Swaps.
May 16 00:24:59.219437 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 00:24:59.219455 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 00:24:59.219476 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 00:24:59.219494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:24:59.219512 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 00:24:59.219530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:24:59.219548 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 00:24:59.219565 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 00:24:59.219586 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 00:24:59.219604 systemd[1]: Mounting media.mount - External Media Directory...
May 16 00:24:59.219622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:24:59.219640 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 00:24:59.219657 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 00:24:59.219704 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 00:24:59.219724 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 00:24:59.219742 systemd[1]: Reached target machines.target - Containers.
May 16 00:24:59.219764 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 00:24:59.219783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:24:59.219800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 00:24:59.219819 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 00:24:59.219837 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:24:59.219854 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 00:24:59.219871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:24:59.219889 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 00:24:59.219906 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:24:59.219930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 00:24:59.219951 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 00:24:59.219970 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 00:24:59.219988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 00:24:59.220007 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 00:24:59.220027 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:24:59.220045 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 00:24:59.220063 kernel: fuse: init (API version 7.39)
May 16 00:24:59.220085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 00:24:59.220103 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 00:24:59.220120 kernel: loop: module loaded
May 16 00:24:59.220138 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 00:24:59.220157 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 00:24:59.220175 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 00:24:59.220193 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 00:24:59.220211 systemd[1]: Stopped verity-setup.service.
May 16 00:24:59.220230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:24:59.220253 kernel: ACPI: bus type drm_connector registered
May 16 00:24:59.220271 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 00:24:59.222376 systemd-journald[1128]: Collecting audit messages is disabled.
May 16 00:24:59.222419 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 00:24:59.222439 systemd[1]: Mounted media.mount - External Media Directory.
May 16 00:24:59.222460 systemd-journald[1128]: Journal started
May 16 00:24:59.222492 systemd-journald[1128]: Runtime Journal (/run/log/journal/1cde6f1cf8574bcd909058505b59a8f7) is 6M, max 48.3M, 42.3M free.
May 16 00:24:58.738743 systemd[1]: Queued start job for default target multi-user.target.
May 16 00:24:58.756939 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 00:24:58.759109 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 00:24:59.228930 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 00:24:59.229977 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 00:24:59.231515 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 00:24:59.233004 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 00:24:59.234711 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 00:24:59.236725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:24:59.238876 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 00:24:59.239165 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 00:24:59.241120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:24:59.241453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:24:59.243706 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:24:59.244011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 00:24:59.245860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:24:59.246168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:24:59.248351 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 00:24:59.248634 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 00:24:59.250525 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:24:59.250827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:24:59.252931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 00:24:59.254912 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 00:24:59.257343 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 00:24:59.259714 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 00:24:59.276696 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 00:24:59.286619 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 00:24:59.290032 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 00:24:59.291620 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 00:24:59.291664 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 00:24:59.297021 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 00:24:59.311158 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 00:24:59.315392 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 00:24:59.316964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:24:59.318811 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 00:24:59.322453 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 00:24:59.324024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:24:59.329531 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 00:24:59.331021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:24:59.334923 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 00:24:59.339108 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 00:24:59.345400 systemd-journald[1128]: Time spent on flushing to /var/log/journal/1cde6f1cf8574bcd909058505b59a8f7 is 42.462ms for 967 entries.
May 16 00:24:59.345400 systemd-journald[1128]: System Journal (/var/log/journal/1cde6f1cf8574bcd909058505b59a8f7) is 8M, max 195.6M, 187.6M free.
May 16 00:24:59.451539 systemd-journald[1128]: Received client request to flush runtime journal.
May 16 00:24:59.451858 kernel: loop0: detected capacity change from 0 to 151640
May 16 00:24:59.343843 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 00:24:59.358402 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 00:24:59.361920 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 00:24:59.371119 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 00:24:59.415833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:24:59.424910 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 00:24:59.428840 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 00:24:59.437816 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 00:24:59.463180 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 00:24:59.467784 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 00:24:59.477838 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 16 00:24:59.507414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 00:24:59.535743 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:24:59.535359 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 00:24:59.553399 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 00:24:59.561703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 00:24:59.577721 kernel: loop1: detected capacity change from 0 to 221472
May 16 00:24:59.604735 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 16 00:24:59.604761 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 16 00:24:59.618407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:24:59.643708 kernel: loop2: detected capacity change from 0 to 109808
May 16 00:24:59.735338 kernel: loop3: detected capacity change from 0 to 151640
May 16 00:24:59.761905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:24:59.790887 kernel: loop4: detected capacity change from 0 to 221472
May 16 00:24:59.856032 kernel: loop5: detected capacity change from 0 to 109808
May 16 00:24:59.873499 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 00:24:59.874507 (sd-merge)[1201]: Merged extensions into '/usr'.
May 16 00:24:59.886051 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 00:24:59.886086 systemd[1]: Reloading...
May 16 00:24:59.978794 zram_generator::config[1232]: No configuration found.
May 16 00:25:00.202322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:25:00.223076 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 00:25:00.299586 systemd[1]: Reloading finished in 412 ms.
May 16 00:25:00.342776 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 00:25:00.358433 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 00:25:00.389471 systemd[1]: Starting ensure-sysext.service...
May 16 00:25:00.394492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 00:25:00.426467 systemd[1]: Reload requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
May 16 00:25:00.426489 systemd[1]: Reloading...
May 16 00:25:00.463179 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:25:00.463558 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 00:25:00.465486 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:25:00.465989 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
May 16 00:25:00.466179 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
May 16 00:25:00.482166 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:25:00.482187 systemd-tmpfiles[1268]: Skipping /boot
May 16 00:25:00.510703 zram_generator::config[1303]: No configuration found.
May 16 00:25:00.516340 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:25:00.516361 systemd-tmpfiles[1268]: Skipping /boot
May 16 00:25:00.693966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:25:00.809797 systemd[1]: Reloading finished in 382 ms.
May 16 00:25:00.837803 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 00:25:00.872993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:25:00.902621 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:25:00.927447 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 00:25:00.941600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 00:25:00.958466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 00:25:00.969170 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:25:00.990471 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 00:25:01.000300 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:25:01.000524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:25:01.006935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:25:01.021359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:25:01.049657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:25:01.052620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:25:01.052850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:25:01.062019 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 00:25:01.070635 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:25:01.072759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:25:01.073058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:25:01.083999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:25:01.087160 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:25:01.106381 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:25:01.106755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:25:01.114376 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
May 16 00:25:01.119405 augenrules[1365]: No rules
May 16 00:25:01.120306 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 00:25:01.124341 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:25:01.124717 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:25:01.149088 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 00:25:01.155331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:25:01.155609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:25:01.165346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:25:01.172148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:25:01.198566 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:25:01.205859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:25:01.206058 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:25:01.214799 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 00:25:01.221478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:25:01.224489 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 00:25:01.234500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:25:01.234831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:25:01.242331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:25:01.245881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:25:01.249776 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:25:01.250164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:25:01.266539 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 00:25:01.269326 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 00:25:01.276499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:25:01.292820 systemd[1]: Finished ensure-sysext.service.
May 16 00:25:01.304598 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:25:01.308656 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:25:01.314324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:25:01.319723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:25:01.329946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 00:25:01.334645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:25:01.355281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:25:01.358224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:25:01.358371 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 00:25:01.367359 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 00:25:01.379540 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 00:25:01.381118 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:25:01.381167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:25:01.382139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:25:01.382432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:25:01.384327 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:25:01.384785 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 00:25:01.390120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:25:01.390868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:25:01.415622 augenrules[1404]: /sbin/augenrules: No change
May 16 00:25:01.438986 augenrules[1437]: No rules
May 16 00:25:01.439398 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:25:01.440827 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:25:01.444068 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:25:01.444409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:25:01.468940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:25:01.469052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:25:01.489084 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 00:25:01.527713 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1390)
May 16 00:25:01.529831 systemd-resolved[1345]: Positive Trust Anchors:
May 16 00:25:01.529851 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:25:01.529888 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 00:25:01.541552 systemd-resolved[1345]: Defaulting to hostname 'linux'.
May 16 00:25:01.545553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 00:25:01.547569 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 00:25:01.549357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 00:25:01.555986 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 00:25:01.599941 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 00:25:01.644664 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 00:25:01.656921 systemd[1]: Reached target time-set.target - System Time Set.
May 16 00:25:01.670692 systemd-networkd[1417]: lo: Link UP
May 16 00:25:01.670706 systemd-networkd[1417]: lo: Gained carrier
May 16 00:25:01.677128 systemd-networkd[1417]: Enumeration completed
May 16 00:25:01.677337 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 00:25:01.684140 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:25:01.684152 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:25:01.685121 systemd-networkd[1417]: eth0: Link UP
May 16 00:25:01.685133 systemd-networkd[1417]: eth0: Gained carrier
May 16 00:25:01.685150 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:25:01.685974 systemd[1]: Reached target network.target - Network.
May 16 00:25:01.690378 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 00:25:01.702459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 00:25:01.713988 systemd-networkd[1417]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:25:01.715872 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection.
May 16 00:25:01.293085 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:25:01.323016 systemd-journald[1128]: Time jumped backwards, rotating.
May 16 00:25:01.293142 systemd-timesyncd[1424]: Initial clock synchronization to Fri 2025-05-16 00:25:01.292970 UTC.
May 16 00:25:01.293196 systemd-resolved[1345]: Clock change detected. Flushing caches.
May 16 00:25:01.303031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:25:01.343377 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 16 00:25:01.344278 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 00:25:01.357368 kernel: ACPI: button: Power Button [PWRF]
May 16 00:25:01.372695 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 00:25:01.373069 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 16 00:25:01.373288 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 00:25:01.380411 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 16 00:25:01.548592 kernel: mousedev: PS/2 mouse device common for all mice
May 16 00:25:01.612737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:25:01.775988 kernel: kvm_amd: TSC scaling supported
May 16 00:25:01.776110 kernel: kvm_amd: Nested Virtualization enabled
May 16 00:25:01.776161 kernel: kvm_amd: Nested Paging enabled
May 16 00:25:01.776212 kernel: kvm_amd: LBR virtualization supported
May 16 00:25:01.776719 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 00:25:01.777568 kernel: kvm_amd: Virtual GIF supported
May 16 00:25:01.996169 kernel: EDAC MC: Ver: 3.0.0
May 16 00:25:02.037367 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 16 00:25:02.049618 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 16 00:25:02.105905 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:25:02.154547 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 16 00:25:02.158510 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 00:25:02.160116 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 00:25:02.162127 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 00:25:02.163867 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 00:25:02.165910 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 00:25:02.167513 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 00:25:02.170110 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 00:25:02.171917 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 00:25:02.171965 systemd[1]: Reached target paths.target - Path Units.
May 16 00:25:02.173438 systemd[1]: Reached target timers.target - Timer Units.
May 16 00:25:02.176385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 00:25:02.180561 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 00:25:02.187999 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 00:25:02.193148 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 00:25:02.194955 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 00:25:02.201549 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 00:25:02.203815 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 00:25:02.212120 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 16 00:25:02.214740 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 00:25:02.216911 systemd[1]: Reached target sockets.target - Socket Units.
May 16 00:25:02.225310 systemd[1]: Reached target basic.target - Basic System.
May 16 00:25:02.228498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 00:25:02.229265 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 00:25:02.234005 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:25:02.234576 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 00:25:02.257096 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 00:25:02.265000 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 00:25:02.284715 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 00:25:02.289502 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 00:25:02.293884 jq[1480]: false
May 16 00:25:02.296071 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 00:25:02.299133 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 00:25:02.302068 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 00:25:02.307538 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 00:25:02.315999 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 00:25:02.318488 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 00:25:02.319299 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 00:25:02.322874 systemd[1]: Starting update-engine.service - Update Engine...
May 16 00:25:02.325968 extend-filesystems[1481]: Found loop3
May 16 00:25:02.325968 extend-filesystems[1481]: Found loop4
May 16 00:25:02.325968 extend-filesystems[1481]: Found loop5
May 16 00:25:02.325968 extend-filesystems[1481]: Found sr0
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda1
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda2
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda3
May 16 00:25:02.325968 extend-filesystems[1481]: Found usr
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda4
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda6
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda7
May 16 00:25:02.325968 extend-filesystems[1481]: Found vda9
May 16 00:25:02.325968 extend-filesystems[1481]: Checking size of /dev/vda9
May 16 00:25:02.444973 extend-filesystems[1481]: Resized partition /dev/vda9
May 16 00:25:02.334377 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 00:25:02.450777 extend-filesystems[1513]: resize2fs 1.47.2 (1-Jan-2025)
May 16 00:25:02.480281 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 00:25:02.337514 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 16 00:25:02.454579 dbus-daemon[1479]: [system] SELinux support is enabled
May 16 00:25:02.480936 update_engine[1490]: I20250516 00:25:02.416606 1490 main.cc:92] Flatcar Update Engine starting
May 16 00:25:02.480936 update_engine[1490]: I20250516 00:25:02.479266 1490 update_check_scheduler.cc:74] Next update check in 10m10s
May 16 00:25:02.341660 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 00:25:02.342009 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 00:25:02.491197 jq[1492]: true
May 16 00:25:02.345915 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 00:25:02.491579 tar[1495]: linux-amd64/helm
May 16 00:25:02.346258 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 00:25:02.375535 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 00:25:02.492418 jq[1505]: true
May 16 00:25:02.391239 systemd[1]: motdgen.service: Deactivated successfully.
May 16 00:25:02.391708 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 00:25:02.455282 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 00:25:02.462294 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 00:25:02.462322 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 00:25:02.462752 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 00:25:02.462769 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 00:25:02.466668 systemd[1]: Started update-engine.service - Update Engine.
May 16 00:25:02.476382 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 00:25:02.508372 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1391)
May 16 00:25:02.521789 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 00:25:02.584960 systemd-logind[1488]: Watching system buttons on /dev/input/event1 (Power Button)
May 16 00:25:02.585003 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 00:25:02.585918 systemd-logind[1488]: New seat seat0.
May 16 00:25:02.599155 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 00:25:02.623732 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 00:25:02.623732 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 00:25:02.623732 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 00:25:02.638098 extend-filesystems[1481]: Resized filesystem in /dev/vda9
May 16 00:25:02.644131 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 00:25:02.644530 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 00:25:02.656099 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 00:25:02.666715 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
May 16 00:25:02.664981 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 00:25:02.671414 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 00:25:02.872157 containerd[1500]: time="2025-05-16T00:25:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 16 00:25:02.876884 containerd[1500]: time="2025-05-16T00:25:02.874163647Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 16 00:25:02.889926 containerd[1500]: time="2025-05-16T00:25:02.889165225Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.706µs"
May 16 00:25:02.889926 containerd[1500]: time="2025-05-16T00:25:02.889228795Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 16 00:25:02.889926 containerd[1500]: time="2025-05-16T00:25:02.889269210Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 16 00:25:02.889926 containerd[1500]: time="2025-05-16T00:25:02.889651598Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 16 00:25:02.890181 containerd[1500]: time="2025-05-16T00:25:02.889961830Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 16 00:25:02.890181 containerd[1500]: time="2025-05-16T00:25:02.890023946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 00:25:02.890181 containerd[1500]: time="2025-05-16T00:25:02.890122120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 00:25:02.890181 containerd[1500]: time="2025-05-16T00:25:02.890145424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 00:25:02.890721 containerd[1500]: time="2025-05-16T00:25:02.890643598Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 00:25:02.890721 containerd[1500]: time="2025-05-16T00:25:02.890711145Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 00:25:02.890833 containerd[1500]: time="2025-05-16T00:25:02.890734839Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 00:25:02.890833 containerd[1500]: time="2025-05-16T00:25:02.890754737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 16 00:25:02.890966 containerd[1500]: time="2025-05-16T00:25:02.890912182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 16 00:25:02.891301 containerd[1500]: time="2025-05-16T00:25:02.891257400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 00:25:02.891371 containerd[1500]: time="2025-05-16T00:25:02.891317853Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 00:25:02.891371 containerd[1500]: time="2025-05-16T00:25:02.891339203Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 16 00:25:02.891456 containerd[1500]: time="2025-05-16T00:25:02.891398244Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 16 00:25:02.892796 containerd[1500]: time="2025-05-16T00:25:02.892282593Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 16 00:25:02.892796 containerd[1500]: time="2025-05-16T00:25:02.892453102Z" level=info msg="metadata content store policy set" policy=shared
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.906996852Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907091159Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907112189Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907128700Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907147124Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907163906Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907180787Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907198601Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907213980Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907230030Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907243375Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907273932Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907505446Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 00:25:02.909752 containerd[1500]: time="2025-05-16T00:25:02.907533719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907553657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907572392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907600374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907623257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907641271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907661058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907677148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907692788Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907706614Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907797564Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907816319Z" level=info msg="Start snapshots syncer"
May 16 00:25:02.910277 containerd[1500]: time="2025-05-16T00:25:02.907859901Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 00:25:02.910618 containerd[1500]: time="2025-05-16T00:25:02.908166366Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 00:25:02.910618 containerd[1500]: time="2025-05-16T00:25:02.908229535Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908471869Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908682524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908712380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908731626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908747927Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908765770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908779616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908793933Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908831834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908848005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908862081Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908919769Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908939877Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 00:25:02.910783 containerd[1500]: time="2025-05-16T00:25:02.908953182Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909034184Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909047749Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909062106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909077134Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909102121Z" level=info msg="runtime interface created" May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909110216Z" level=info msg="created NRI interface" May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909129382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909146414Z" level=info msg="Connect containerd service" May 16 00:25:02.911131 containerd[1500]: time="2025-05-16T00:25:02.909176250Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:25:02.913583 
containerd[1500]: time="2025-05-16T00:25:02.912898743Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:25:03.055645 systemd-networkd[1417]: eth0: Gained IPv6LL May 16 00:25:03.063660 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:25:03.070261 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:25:03.082805 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 00:25:03.103104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:03.116206 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 00:25:03.167601 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:25:03.168787 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 00:25:03.175332 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 00:25:03.181083 sshd_keygen[1506]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:25:03.192654 containerd[1500]: time="2025-05-16T00:25:03.191991711Z" level=info msg="Start subscribing containerd event" May 16 00:25:03.192654 containerd[1500]: time="2025-05-16T00:25:03.192243914Z" level=info msg="Start recovering state" May 16 00:25:03.192654 containerd[1500]: time="2025-05-16T00:25:03.192521715Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 16 00:25:03.192976 containerd[1500]: time="2025-05-16T00:25:03.192871421Z" level=info msg="Start event monitor" May 16 00:25:03.192976 containerd[1500]: time="2025-05-16T00:25:03.192915083Z" level=info msg="Start cni network conf syncer for default" May 16 00:25:03.192976 containerd[1500]: time="2025-05-16T00:25:03.192926986Z" level=info msg="Start streaming server" May 16 00:25:03.192976 containerd[1500]: time="2025-05-16T00:25:03.192945370Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 00:25:03.193300 containerd[1500]: time="2025-05-16T00:25:03.193131830Z" level=info msg="runtime interface starting up..." May 16 00:25:03.193300 containerd[1500]: time="2025-05-16T00:25:03.193148020Z" level=info msg="starting plugins..." May 16 00:25:03.193419 containerd[1500]: time="2025-05-16T00:25:03.193401135Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 00:25:03.194231 containerd[1500]: time="2025-05-16T00:25:03.194163655Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:25:03.194743 containerd[1500]: time="2025-05-16T00:25:03.194285824Z" level=info msg="containerd successfully booted in 0.323059s" May 16 00:25:03.194417 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:25:03.216550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:25:03.241799 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 00:25:03.250705 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 00:25:03.283948 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:25:03.284315 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 00:25:03.296969 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 00:25:03.330292 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 16 00:25:03.336235 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 00:25:03.345287 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 00:25:03.349027 systemd[1]: Reached target getty.target - Login Prompts.
May 16 00:25:03.558934 tar[1495]: linux-amd64/LICENSE
May 16 00:25:03.558934 tar[1495]: linux-amd64/README.md
May 16 00:25:03.596558 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 00:25:04.628603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:25:04.633431 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 00:25:04.644120 systemd[1]: Startup finished in 721ms (kernel) + 7.616s (initrd) + 7.771s (userspace) = 16.108s.
May 16 00:25:04.647055 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:25:05.392519 kubelet[1607]: E0516 00:25:05.392435 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:25:05.401594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:25:05.401874 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:25:05.404791 systemd[1]: kubelet.service: Consumed 1.411s CPU time, 268.8M memory peak.
May 16 00:25:12.184408 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 00:25:12.202106 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:52044.service - OpenSSH per-connection server daemon (10.0.0.1:52044).
May 16 00:25:12.330553 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 52044 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:25:12.333364 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:25:12.349790 systemd-logind[1488]: New session 1 of user core.
May 16 00:25:12.353653 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 00:25:12.361271 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 00:25:12.396243 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 00:25:12.402330 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 00:25:12.453250 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 00:25:12.456805 systemd-logind[1488]: New session c1 of user core.
May 16 00:25:12.671392 systemd[1624]: Queued start job for default target default.target.
May 16 00:25:12.685388 systemd[1624]: Created slice app.slice - User Application Slice.
May 16 00:25:12.685426 systemd[1624]: Reached target paths.target - Paths.
May 16 00:25:12.689006 systemd[1624]: Reached target timers.target - Timers.
May 16 00:25:12.693103 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 00:25:12.719145 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 00:25:12.719390 systemd[1624]: Reached target sockets.target - Sockets.
May 16 00:25:12.719468 systemd[1624]: Reached target basic.target - Basic System.
May 16 00:25:12.719546 systemd[1624]: Reached target default.target - Main User Target.
May 16 00:25:12.719599 systemd[1624]: Startup finished in 252ms.
May 16 00:25:12.723238 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 00:25:12.750725 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 00:25:12.850619 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:52050.service - OpenSSH per-connection server daemon (10.0.0.1:52050).
May 16 00:25:12.930711 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 52050 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:25:12.932618 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:25:12.942432 systemd-logind[1488]: New session 2 of user core.
May 16 00:25:12.952704 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 00:25:13.013885 sshd[1637]: Connection closed by 10.0.0.1 port 52050
May 16 00:25:13.015638 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
May 16 00:25:13.032908 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:52050.service: Deactivated successfully.
May 16 00:25:13.037015 systemd[1]: session-2.scope: Deactivated successfully.
May 16 00:25:13.043944 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit.
May 16 00:25:13.047486 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:52064.service - OpenSSH per-connection server daemon (10.0.0.1:52064).
May 16 00:25:13.052769 systemd-logind[1488]: Removed session 2.
May 16 00:25:13.131586 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 52064 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:25:13.138516 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:25:13.167013 systemd-logind[1488]: New session 3 of user core.
May 16 00:25:13.188172 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 00:25:13.261706 sshd[1645]: Connection closed by 10.0.0.1 port 52064
May 16 00:25:13.262574 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
May 16 00:25:13.278095 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:52064.service: Deactivated successfully.
May 16 00:25:13.280949 systemd[1]: session-3.scope: Deactivated successfully.
May 16 00:25:13.290957 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit.
May 16 00:25:13.300329 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:52072.service - OpenSSH per-connection server daemon (10.0.0.1:52072).
May 16 00:25:13.303725 systemd-logind[1488]: Removed session 3.
May 16 00:25:13.387954 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 52072 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:25:13.403744 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:25:13.412375 systemd-logind[1488]: New session 4 of user core.
May 16 00:25:13.421715 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 00:25:13.497941 sshd[1653]: Connection closed by 10.0.0.1 port 52072
May 16 00:25:13.500189 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
May 16 00:25:13.521087 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:52072.service: Deactivated successfully.
May 16 00:25:13.523931 systemd[1]: session-4.scope: Deactivated successfully.
May 16 00:25:13.524995 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit.
May 16 00:25:13.533055 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:52080.service - OpenSSH per-connection server daemon (10.0.0.1:52080).
May 16 00:25:13.534858 systemd-logind[1488]: Removed session 4.
May 16 00:25:13.616465 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 52080 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:25:13.619238 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:25:13.653172 systemd-logind[1488]: New session 5 of user core.
May 16 00:25:13.660677 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 00:25:13.741120 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 00:25:13.741616 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:25:13.781778 sudo[1662]: pam_unix(sudo:session): session closed for user root
May 16 00:25:13.785456 sshd[1661]: Connection closed by 10.0.0.1 port 52080
May 16 00:25:13.787593 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
May 16 00:25:13.809529 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:52080.service: Deactivated successfully.
May 16 00:25:13.815671 systemd[1]: session-5.scope: Deactivated successfully.
May 16 00:25:13.818899 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit.
May 16 00:25:13.826175 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:52082.service - OpenSSH per-connection server daemon (10.0.0.1:52082).
May 16 00:25:13.830672 systemd-logind[1488]: Removed session 5.
May 16 00:25:13.936903 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 52082 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:25:13.939613 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:25:13.946325 systemd-logind[1488]: New session 6 of user core.
May 16 00:25:13.959772 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 00:25:14.024761 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 00:25:14.026967 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:25:14.051702 sudo[1672]: pam_unix(sudo:session): session closed for user root
May 16 00:25:14.064490 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 00:25:14.068106 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:25:14.132915 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:25:14.234328 augenrules[1694]: No rules
May 16 00:25:14.240209 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:25:14.245046 sudo[1671]: pam_unix(sudo:session): session closed for user root
May 16 00:25:14.240639 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:25:14.248738 sshd[1670]: Connection closed by 10.0.0.1 port 52082
May 16 00:25:14.249440 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
May 16 00:25:14.279609 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:52082.service: Deactivated successfully.
May 16 00:25:14.284247 systemd[1]: session-6.scope: Deactivated successfully.
May 16 00:25:14.295329 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit.
May 16 00:25:14.307999 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:52084.service - OpenSSH per-connection server daemon (10.0.0.1:52084).
May 16 00:25:14.316336 systemd-logind[1488]: Removed session 6.
May 16 00:25:14.388612 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 52084 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:25:14.391023 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:25:14.433913 systemd-logind[1488]: New session 7 of user core.
May 16 00:25:14.441119 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 00:25:14.509553 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 00:25:14.510037 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 00:25:15.173658 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 00:25:15.181080 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 00:25:15.602584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 00:25:15.609095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:25:15.647082 dockerd[1728]: time="2025-05-16T00:25:15.645330686Z" level=info msg="Starting up"
May 16 00:25:15.657062 dockerd[1728]: time="2025-05-16T00:25:15.655281911Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 00:25:15.927989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:25:15.941971 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:25:16.063546 kubelet[1761]: E0516 00:25:16.055262 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:25:16.076093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:25:16.076323 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:25:16.076755 systemd[1]: kubelet.service: Consumed 304ms CPU time, 112.8M memory peak.
May 16 00:25:16.196132 dockerd[1728]: time="2025-05-16T00:25:16.195969046Z" level=info msg="Loading containers: start."
May 16 00:25:16.636565 kernel: Initializing XFRM netlink socket
May 16 00:25:16.902144 systemd-networkd[1417]: docker0: Link UP
May 16 00:25:17.010654 dockerd[1728]: time="2025-05-16T00:25:17.010579322Z" level=info msg="Loading containers: done."
May 16 00:25:17.058196 dockerd[1728]: time="2025-05-16T00:25:17.058096420Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 00:25:17.058448 dockerd[1728]: time="2025-05-16T00:25:17.058227787Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 16 00:25:17.058448 dockerd[1728]: time="2025-05-16T00:25:17.058428303Z" level=info msg="Daemon has completed initialization"
May 16 00:25:17.137477 dockerd[1728]: time="2025-05-16T00:25:17.137416522Z" level=info msg="API listen on /run/docker.sock"
May 16 00:25:17.137552 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 00:25:18.218770 containerd[1500]: time="2025-05-16T00:25:18.218704569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 16 00:25:19.205525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853667673.mount: Deactivated successfully.
May 16 00:25:22.386835 containerd[1500]: time="2025-05-16T00:25:22.386736817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:22.389162 containerd[1500]: time="2025-05-16T00:25:22.389047601Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 16 00:25:22.390904 containerd[1500]: time="2025-05-16T00:25:22.390849541Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:22.395207 containerd[1500]: time="2025-05-16T00:25:22.395128527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:22.401746 containerd[1500]: time="2025-05-16T00:25:22.399731160Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 4.180920001s" May 16 00:25:22.401746 containerd[1500]: time="2025-05-16T00:25:22.399815188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 16 00:25:22.403002 containerd[1500]: time="2025-05-16T00:25:22.402715127Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 16 00:25:25.187390 containerd[1500]: time="2025-05-16T00:25:25.186026123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:25.195597 containerd[1500]: time="2025-05-16T00:25:25.190326269Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 16 00:25:25.195597 containerd[1500]: time="2025-05-16T00:25:25.193260403Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:25.205083 containerd[1500]: time="2025-05-16T00:25:25.202834150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:25.205083 containerd[1500]: time="2025-05-16T00:25:25.204606074Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 2.801838499s" May 16 00:25:25.205083 containerd[1500]: time="2025-05-16T00:25:25.204645718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 16 00:25:25.205815 containerd[1500]: time="2025-05-16T00:25:25.205458122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 16 00:25:26.105846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:25:26.121569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:26.455626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:25:26.474944 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:25:26.578694 kubelet[2018]: E0516 00:25:26.578587 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:25:26.586870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:25:26.587139 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:25:26.587684 systemd[1]: kubelet.service: Consumed 369ms CPU time, 112.5M memory peak. May 16 00:25:29.828975 containerd[1500]: time="2025-05-16T00:25:29.827839669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:29.864367 containerd[1500]: time="2025-05-16T00:25:29.864212644Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 16 00:25:29.867958 containerd[1500]: time="2025-05-16T00:25:29.867875915Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:29.874994 containerd[1500]: time="2025-05-16T00:25:29.874913355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:29.877552 containerd[1500]: time="2025-05-16T00:25:29.877492173Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id 
\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 4.671995287s" May 16 00:25:29.877552 containerd[1500]: time="2025-05-16T00:25:29.877542577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 16 00:25:29.878302 containerd[1500]: time="2025-05-16T00:25:29.878186695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 00:25:31.859099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862053233.mount: Deactivated successfully. May 16 00:25:33.320705 containerd[1500]: time="2025-05-16T00:25:33.319485819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:33.323420 containerd[1500]: time="2025-05-16T00:25:33.323296647Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 16 00:25:33.325792 containerd[1500]: time="2025-05-16T00:25:33.325709923Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:33.332142 containerd[1500]: time="2025-05-16T00:25:33.332048703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:33.333070 containerd[1500]: time="2025-05-16T00:25:33.333009165Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 3.454789568s" May 16 00:25:33.333070 containerd[1500]: time="2025-05-16T00:25:33.333055181Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 16 00:25:33.334205 containerd[1500]: time="2025-05-16T00:25:33.333947645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:25:34.306971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269161599.mount: Deactivated successfully. May 16 00:25:36.602685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 16 00:25:36.608319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:36.921949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:25:36.941121 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:25:37.081092 kubelet[2100]: E0516 00:25:37.075906 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:25:37.088147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:25:37.088403 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:25:37.088973 systemd[1]: kubelet.service: Consumed 301ms CPU time, 112.8M memory peak. 
May 16 00:25:37.619395 containerd[1500]: time="2025-05-16T00:25:37.618305901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:37.622842 containerd[1500]: time="2025-05-16T00:25:37.622735481Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 00:25:37.624625 containerd[1500]: time="2025-05-16T00:25:37.624568416Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:37.632459 containerd[1500]: time="2025-05-16T00:25:37.631306594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:37.633438 containerd[1500]: time="2025-05-16T00:25:37.633336882Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.299346155s" May 16 00:25:37.633438 containerd[1500]: time="2025-05-16T00:25:37.633401607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 00:25:37.636404 containerd[1500]: time="2025-05-16T00:25:37.634385295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:25:38.499575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82658944.mount: Deactivated successfully. 
May 16 00:25:38.533910 containerd[1500]: time="2025-05-16T00:25:38.532942639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:25:38.535837 containerd[1500]: time="2025-05-16T00:25:38.534187398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 00:25:38.535837 containerd[1500]: time="2025-05-16T00:25:38.535632115Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:25:38.538672 containerd[1500]: time="2025-05-16T00:25:38.538550764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:25:38.539584 containerd[1500]: time="2025-05-16T00:25:38.539402873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 904.984995ms" May 16 00:25:38.539584 containerd[1500]: time="2025-05-16T00:25:38.539438372Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 00:25:38.540490 containerd[1500]: time="2025-05-16T00:25:38.540222630Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 16 00:25:39.435504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968543535.mount: 
Deactivated successfully. May 16 00:25:41.816561 kernel: hrtimer: interrupt took 3229757 ns May 16 00:25:44.201200 containerd[1500]: time="2025-05-16T00:25:44.199220634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:44.210595 containerd[1500]: time="2025-05-16T00:25:44.210405959Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 16 00:25:44.212582 containerd[1500]: time="2025-05-16T00:25:44.212428344Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:44.223375 containerd[1500]: time="2025-05-16T00:25:44.221740127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:25:44.227446 containerd[1500]: time="2025-05-16T00:25:44.225922742Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.685666619s" May 16 00:25:44.227446 containerd[1500]: time="2025-05-16T00:25:44.225983519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 16 00:25:47.102635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 16 00:25:47.114834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:47.442662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:25:47.446307 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:25:47.466769 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:47.474326 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:25:47.476638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:25:47.478437 systemd[1]: kubelet.service: Consumed 242ms CPU time, 104.8M memory peak. May 16 00:25:47.495524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:47.565723 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-7.scope)... May 16 00:25:47.565754 systemd[1]: Reloading... May 16 00:25:47.578252 update_engine[1490]: I20250516 00:25:47.573240 1490 update_attempter.cc:509] Updating boot flags... May 16 00:25:47.724599 zram_generator::config[2259]: No configuration found. May 16 00:25:48.538702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2296) May 16 00:25:49.088531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:25:49.139374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2294) May 16 00:25:49.193384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2294) May 16 00:25:49.248272 systemd[1]: Reloading finished in 1681 ms. May 16 00:25:49.317981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:25:49.337045 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:25:49.390002 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:25:49.390002 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:25:49.390002 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:25:49.390002 kubelet[2311]: I0516 00:25:49.389651 2311 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:25:49.409897 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:49.421002 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:25:49.421455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:25:49.421527 systemd[1]: kubelet.service: Consumed 294ms CPU time, 113.2M memory peak. May 16 00:25:49.425549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:25:49.654770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:25:49.673020 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:25:49.722073 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:25:49.722073 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:25:49.722073 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:25:49.722583 kubelet[2329]: I0516 00:25:49.722095 2329 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:25:49.994645 kubelet[2329]: I0516 00:25:49.994477 2329 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:25:49.994645 kubelet[2329]: I0516 00:25:49.994521 2329 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:25:49.994851 kubelet[2329]: I0516 00:25:49.994824 2329 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:25:50.015952 kubelet[2329]: E0516 00:25:50.015907 2329 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:50.017221 kubelet[2329]: I0516 
00:25:50.017171 2329 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:25:50.025778 kubelet[2329]: I0516 00:25:50.025724 2329 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 00:25:50.032299 kubelet[2329]: I0516 00:25:50.032269 2329 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:25:50.032553 kubelet[2329]: I0516 00:25:50.032523 2329 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:25:50.032728 kubelet[2329]: I0516 00:25:50.032679 2329 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:25:50.032930 kubelet[2329]: I0516 00:25:50.032713 2329 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":
{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:25:50.032930 kubelet[2329]: I0516 00:25:50.032930 2329 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:25:50.033082 kubelet[2329]: I0516 00:25:50.032938 2329 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:25:50.033082 kubelet[2329]: I0516 00:25:50.033068 2329 state_mem.go:36] "Initialized new in-memory state store" May 16 00:25:50.035472 kubelet[2329]: I0516 00:25:50.035417 2329 kubelet.go:408] "Attempting to sync node with API server" May 16 00:25:50.035472 kubelet[2329]: I0516 00:25:50.035451 2329 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:25:50.035472 kubelet[2329]: I0516 00:25:50.035490 2329 kubelet.go:314] "Adding apiserver pod source" May 16 00:25:50.035715 kubelet[2329]: I0516 00:25:50.035514 2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:25:50.041749 kubelet[2329]: W0516 00:25:50.041679 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:50.041903 kubelet[2329]: E0516 00:25:50.041753 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:50.042016 kubelet[2329]: I0516 00:25:50.041972 2329 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 00:25:50.042495 kubelet[2329]: I0516 00:25:50.042462 2329 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:25:50.042923 kubelet[2329]: W0516 00:25:50.042861 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:50.042974 kubelet[2329]: E0516 00:25:50.042951 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:50.043030 kubelet[2329]: W0516 00:25:50.043011 2329 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 00:25:50.045242 kubelet[2329]: I0516 00:25:50.045194 2329 server.go:1274] "Started kubelet" May 16 00:25:50.045869 kubelet[2329]: I0516 00:25:50.045785 2329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:25:50.046436 kubelet[2329]: I0516 00:25:50.046389 2329 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:25:50.046501 kubelet[2329]: I0516 00:25:50.046464 2329 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:25:50.047415 kubelet[2329]: I0516 00:25:50.046872 2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:25:50.048811 kubelet[2329]: I0516 00:25:50.047927 2329 server.go:449] "Adding debug handlers to kubelet server" May 16 00:25:50.050369 kubelet[2329]: I0516 00:25:50.050316 2329 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:25:50.053523 kubelet[2329]: I0516 00:25:50.052068 2329 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:25:50.053523 kubelet[2329]: I0516 00:25:50.052202 2329 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:25:50.053523 kubelet[2329]: I0516 00:25:50.052254 2329 reconciler.go:26] "Reconciler: start to sync state" May 16 00:25:50.053523 kubelet[2329]: E0516 00:25:50.050533 2329 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fda4046e3f96b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:25:50.045149547 +0000 UTC m=+0.366894988,LastTimestamp:2025-05-16 00:25:50.045149547 +0000 UTC m=+0.366894988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:25:50.053523 kubelet[2329]: E0516 00:25:50.052517 2329 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:25:50.053523 kubelet[2329]: W0516 00:25:50.052950 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:50.053523 kubelet[2329]: E0516 00:25:50.053008 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:50.056490 kubelet[2329]: E0516 00:25:50.054639 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.056490 kubelet[2329]: E0516 00:25:50.054556 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" May 16 00:25:50.056490 kubelet[2329]: I0516 00:25:50.055386 2329 factory.go:221] Registration of the containerd container factory successfully May 16 00:25:50.056490 kubelet[2329]: I0516 00:25:50.055401 2329 factory.go:221] Registration of the systemd 
container factory successfully May 16 00:25:50.056490 kubelet[2329]: I0516 00:25:50.055501 2329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:25:50.071419 kubelet[2329]: I0516 00:25:50.071385 2329 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:25:50.071419 kubelet[2329]: I0516 00:25:50.071405 2329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:25:50.071419 kubelet[2329]: I0516 00:25:50.071420 2329 state_mem.go:36] "Initialized new in-memory state store" May 16 00:25:50.075199 kubelet[2329]: I0516 00:25:50.075130 2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:25:50.076902 kubelet[2329]: I0516 00:25:50.076865 2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:25:50.076961 kubelet[2329]: I0516 00:25:50.076915 2329 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:25:50.076961 kubelet[2329]: I0516 00:25:50.076942 2329 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:25:50.077040 kubelet[2329]: E0516 00:25:50.077008 2329 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:25:50.154994 kubelet[2329]: E0516 00:25:50.154912 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.177471 kubelet[2329]: E0516 00:25:50.177422 2329 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:25:50.255194 kubelet[2329]: E0516 00:25:50.254997 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.255365 kubelet[2329]: E0516 00:25:50.255304 2329 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" May 16 00:25:50.355853 kubelet[2329]: E0516 00:25:50.355767 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.378093 kubelet[2329]: E0516 00:25:50.377977 2329 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:25:50.456693 kubelet[2329]: E0516 00:25:50.456584 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.557064 kubelet[2329]: E0516 00:25:50.556883 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.655992 kubelet[2329]: E0516 00:25:50.655899 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" May 16 00:25:50.658029 kubelet[2329]: E0516 00:25:50.657961 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.759099 kubelet[2329]: E0516 00:25:50.759010 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.778271 kubelet[2329]: E0516 00:25:50.778189 2329 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:25:50.859825 kubelet[2329]: E0516 00:25:50.859760 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.925837 
kubelet[2329]: W0516 00:25:50.925743 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:50.925837 kubelet[2329]: E0516 00:25:50.925834 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:50.960861 kubelet[2329]: E0516 00:25:50.960791 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:50.990191 kubelet[2329]: I0516 00:25:50.990108 2329 policy_none.go:49] "None policy: Start" May 16 00:25:50.990997 kubelet[2329]: I0516 00:25:50.990967 2329 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:25:50.990997 kubelet[2329]: I0516 00:25:50.990988 2329 state_mem.go:35] "Initializing new in-memory state store" May 16 00:25:51.061382 kubelet[2329]: E0516 00:25:51.061277 2329 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:25:51.073598 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:25:51.090069 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:25:51.092969 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 16 00:25:51.100543 kubelet[2329]: I0516 00:25:51.100502 2329 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:25:51.100935 kubelet[2329]: I0516 00:25:51.100913 2329 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:25:51.100987 kubelet[2329]: I0516 00:25:51.100932 2329 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:25:51.101525 kubelet[2329]: I0516 00:25:51.101268 2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:25:51.102325 kubelet[2329]: E0516 00:25:51.102303 2329 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:25:51.202833 kubelet[2329]: I0516 00:25:51.202709 2329 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:25:51.203397 kubelet[2329]: E0516 00:25:51.203322 2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" May 16 00:25:51.405098 kubelet[2329]: I0516 00:25:51.405059 2329 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:25:51.405455 kubelet[2329]: E0516 00:25:51.405414 2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" May 16 00:25:51.449266 kubelet[2329]: W0516 00:25:51.449181 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:51.449378 kubelet[2329]: E0516 00:25:51.449276 2329 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:51.457188 kubelet[2329]: E0516 00:25:51.457042 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" May 16 00:25:51.484010 kubelet[2329]: W0516 00:25:51.483916 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:51.484010 kubelet[2329]: E0516 00:25:51.484012 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:51.521897 kubelet[2329]: W0516 00:25:51.521833 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:51.521897 kubelet[2329]: E0516 00:25:51.521887 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection 
refused" logger="UnhandledError" May 16 00:25:51.587389 systemd[1]: Created slice kubepods-burstable-pod07bead2622bc50ce281897edc217bd5d.slice - libcontainer container kubepods-burstable-pod07bead2622bc50ce281897edc217bd5d.slice. May 16 00:25:51.598693 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 16 00:25:51.609394 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 16 00:25:51.662493 kubelet[2329]: I0516 00:25:51.662419 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07bead2622bc50ce281897edc217bd5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07bead2622bc50ce281897edc217bd5d\") " pod="kube-system/kube-apiserver-localhost" May 16 00:25:51.662493 kubelet[2329]: I0516 00:25:51.662480 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07bead2622bc50ce281897edc217bd5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07bead2622bc50ce281897edc217bd5d\") " pod="kube-system/kube-apiserver-localhost" May 16 00:25:51.662493 kubelet[2329]: I0516 00:25:51.662502 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07bead2622bc50ce281897edc217bd5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07bead2622bc50ce281897edc217bd5d\") " pod="kube-system/kube-apiserver-localhost" May 16 00:25:51.662726 kubelet[2329]: I0516 00:25:51.662523 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:25:51.662726 kubelet[2329]: I0516 00:25:51.662545 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:25:51.662726 kubelet[2329]: I0516 00:25:51.662564 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:25:51.662726 kubelet[2329]: I0516 00:25:51.662588 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:25:51.662726 kubelet[2329]: I0516 00:25:51.662610 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:25:51.662880 kubelet[2329]: I0516 00:25:51.662629 2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:25:51.772560 kubelet[2329]: W0516 00:25:51.772440 2329 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused May 16 00:25:51.772560 kubelet[2329]: E0516 00:25:51.772487 2329 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" May 16 00:25:51.807091 kubelet[2329]: I0516 00:25:51.807014 2329 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:25:51.807427 kubelet[2329]: E0516 00:25:51.807392 2329 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" May 16 00:25:51.897266 kubelet[2329]: E0516 00:25:51.897213 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:25:51.898137 containerd[1500]: time="2025-05-16T00:25:51.898071557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07bead2622bc50ce281897edc217bd5d,Namespace:kube-system,Attempt:0,}" May 16 00:25:51.907436 kubelet[2329]: E0516 00:25:51.907380 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:25:51.907978 containerd[1500]: time="2025-05-16T00:25:51.907926585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 00:25:51.912266 kubelet[2329]: E0516 00:25:51.912228 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:25:51.912715 containerd[1500]: time="2025-05-16T00:25:51.912677550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 00:25:51.967437 containerd[1500]: time="2025-05-16T00:25:51.966472523Z" level=info msg="connecting to shim f19a34a3624d8cd8254abae28917f6a8180b1c7ea41e3ba280b8e946f6cc5510" address="unix:///run/containerd/s/08bf4c1ba241a44435cc07376025cb59c55735f93a3951b06423509bb24865d9" namespace=k8s.io protocol=ttrpc version=3 May 16 00:25:51.967437 containerd[1500]: time="2025-05-16T00:25:51.966557555Z" level=info msg="connecting to shim ed0879df5f5fee1cb85a4287db0879423ba0c86877504d4b8191bab6c49750ca" address="unix:///run/containerd/s/8b52f09624756e952f85be1d994b5c6fc68536e5ee344fca99401d9d239cc199" namespace=k8s.io protocol=ttrpc version=3 May 16 00:25:51.967437 containerd[1500]: time="2025-05-16T00:25:51.966657194Z" level=info msg="connecting to shim 964bfb0ebacb84ccb10b343b683ecc35eb52851a3ada82fb98e481f309338c85" address="unix:///run/containerd/s/d00aede99e8356f44abde553ae57291f0732fdfbb435c615a2e85d0a01ab7a07" namespace=k8s.io protocol=ttrpc version=3 May 16 00:25:51.998498 systemd[1]: Started cri-containerd-964bfb0ebacb84ccb10b343b683ecc35eb52851a3ada82fb98e481f309338c85.scope - libcontainer container 964bfb0ebacb84ccb10b343b683ecc35eb52851a3ada82fb98e481f309338c85. 
May 16 00:25:52.003490 systemd[1]: Started cri-containerd-ed0879df5f5fee1cb85a4287db0879423ba0c86877504d4b8191bab6c49750ca.scope - libcontainer container ed0879df5f5fee1cb85a4287db0879423ba0c86877504d4b8191bab6c49750ca.
May 16 00:25:52.005957 systemd[1]: Started cri-containerd-f19a34a3624d8cd8254abae28917f6a8180b1c7ea41e3ba280b8e946f6cc5510.scope - libcontainer container f19a34a3624d8cd8254abae28917f6a8180b1c7ea41e3ba280b8e946f6cc5510.
May 16 00:25:52.084494 containerd[1500]: time="2025-05-16T00:25:52.083782291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07bead2622bc50ce281897edc217bd5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f19a34a3624d8cd8254abae28917f6a8180b1c7ea41e3ba280b8e946f6cc5510\""
May 16 00:25:52.085425 kubelet[2329]: E0516 00:25:52.085393 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:52.087294 containerd[1500]: time="2025-05-16T00:25:52.087257403Z" level=info msg="CreateContainer within sandbox \"f19a34a3624d8cd8254abae28917f6a8180b1c7ea41e3ba280b8e946f6cc5510\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 16 00:25:52.125570 kubelet[2329]: E0516 00:25:52.125505 2329 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError"
May 16 00:25:52.253044 containerd[1500]: time="2025-05-16T00:25:52.252999950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"964bfb0ebacb84ccb10b343b683ecc35eb52851a3ada82fb98e481f309338c85\""
May 16 00:25:52.253688 kubelet[2329]: E0516 00:25:52.253666 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:52.255115 containerd[1500]: time="2025-05-16T00:25:52.255088579Z" level=info msg="CreateContainer within sandbox \"964bfb0ebacb84ccb10b343b683ecc35eb52851a3ada82fb98e481f309338c85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 16 00:25:52.263050 containerd[1500]: time="2025-05-16T00:25:52.262985642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed0879df5f5fee1cb85a4287db0879423ba0c86877504d4b8191bab6c49750ca\""
May 16 00:25:52.263911 kubelet[2329]: E0516 00:25:52.263876 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:52.265486 containerd[1500]: time="2025-05-16T00:25:52.265452820Z" level=info msg="CreateContainer within sandbox \"ed0879df5f5fee1cb85a4287db0879423ba0c86877504d4b8191bab6c49750ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 16 00:25:52.310843 containerd[1500]: time="2025-05-16T00:25:52.310775578Z" level=info msg="Container 6d606ef896d5ec3efa805b0fa2099f6217073e077496fc55c18b2b4974d1e5f3: CDI devices from CRI Config.CDIDevices: []"
May 16 00:25:52.359006 containerd[1500]: time="2025-05-16T00:25:52.358951730Z" level=info msg="Container 034632a975266824d27a25225430f70ac3786e4e2854f4f1ba6406f26c1cddf0: CDI devices from CRI Config.CDIDevices: []"
May 16 00:25:52.361186 containerd[1500]: time="2025-05-16T00:25:52.361104289Z" level=info msg="Container 616ee4629b14a14012eadef5f1b4d4ea3412f7fd8704c0e37bba706dace49bbe: CDI devices from CRI Config.CDIDevices: []"
May 16 00:25:52.364913 containerd[1500]: time="2025-05-16T00:25:52.364764514Z" level=info msg="CreateContainer within sandbox \"f19a34a3624d8cd8254abae28917f6a8180b1c7ea41e3ba280b8e946f6cc5510\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d606ef896d5ec3efa805b0fa2099f6217073e077496fc55c18b2b4974d1e5f3\""
May 16 00:25:52.365497 containerd[1500]: time="2025-05-16T00:25:52.365469724Z" level=info msg="StartContainer for \"6d606ef896d5ec3efa805b0fa2099f6217073e077496fc55c18b2b4974d1e5f3\""
May 16 00:25:52.369280 containerd[1500]: time="2025-05-16T00:25:52.369238274Z" level=info msg="connecting to shim 6d606ef896d5ec3efa805b0fa2099f6217073e077496fc55c18b2b4974d1e5f3" address="unix:///run/containerd/s/08bf4c1ba241a44435cc07376025cb59c55735f93a3951b06423509bb24865d9" protocol=ttrpc version=3
May 16 00:25:52.371336 containerd[1500]: time="2025-05-16T00:25:52.371296064Z" level=info msg="CreateContainer within sandbox \"964bfb0ebacb84ccb10b343b683ecc35eb52851a3ada82fb98e481f309338c85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"616ee4629b14a14012eadef5f1b4d4ea3412f7fd8704c0e37bba706dace49bbe\""
May 16 00:25:52.371943 containerd[1500]: time="2025-05-16T00:25:52.371899530Z" level=info msg="StartContainer for \"616ee4629b14a14012eadef5f1b4d4ea3412f7fd8704c0e37bba706dace49bbe\""
May 16 00:25:52.373284 containerd[1500]: time="2025-05-16T00:25:52.373257049Z" level=info msg="CreateContainer within sandbox \"ed0879df5f5fee1cb85a4287db0879423ba0c86877504d4b8191bab6c49750ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"034632a975266824d27a25225430f70ac3786e4e2854f4f1ba6406f26c1cddf0\""
May 16 00:25:52.373494 containerd[1500]: time="2025-05-16T00:25:52.373387797Z" level=info msg="connecting to shim 616ee4629b14a14012eadef5f1b4d4ea3412f7fd8704c0e37bba706dace49bbe" address="unix:///run/containerd/s/d00aede99e8356f44abde553ae57291f0732fdfbb435c615a2e85d0a01ab7a07" protocol=ttrpc version=3
May 16 00:25:52.373716 containerd[1500]: time="2025-05-16T00:25:52.373679581Z" level=info msg="StartContainer for \"034632a975266824d27a25225430f70ac3786e4e2854f4f1ba6406f26c1cddf0\""
May 16 00:25:52.375424 containerd[1500]: time="2025-05-16T00:25:52.374885523Z" level=info msg="connecting to shim 034632a975266824d27a25225430f70ac3786e4e2854f4f1ba6406f26c1cddf0" address="unix:///run/containerd/s/8b52f09624756e952f85be1d994b5c6fc68536e5ee344fca99401d9d239cc199" protocol=ttrpc version=3
May 16 00:25:52.392549 systemd[1]: Started cri-containerd-6d606ef896d5ec3efa805b0fa2099f6217073e077496fc55c18b2b4974d1e5f3.scope - libcontainer container 6d606ef896d5ec3efa805b0fa2099f6217073e077496fc55c18b2b4974d1e5f3.
May 16 00:25:52.397509 systemd[1]: Started cri-containerd-616ee4629b14a14012eadef5f1b4d4ea3412f7fd8704c0e37bba706dace49bbe.scope - libcontainer container 616ee4629b14a14012eadef5f1b4d4ea3412f7fd8704c0e37bba706dace49bbe.
May 16 00:25:52.402699 systemd[1]: Started cri-containerd-034632a975266824d27a25225430f70ac3786e4e2854f4f1ba6406f26c1cddf0.scope - libcontainer container 034632a975266824d27a25225430f70ac3786e4e2854f4f1ba6406f26c1cddf0.
May 16 00:25:52.612957 kubelet[2329]: I0516 00:25:52.612573 2329 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:25:52.619672 containerd[1500]: time="2025-05-16T00:25:52.619604229Z" level=info msg="StartContainer for \"034632a975266824d27a25225430f70ac3786e4e2854f4f1ba6406f26c1cddf0\" returns successfully"
May 16 00:25:52.619872 containerd[1500]: time="2025-05-16T00:25:52.619789600Z" level=info msg="StartContainer for \"616ee4629b14a14012eadef5f1b4d4ea3412f7fd8704c0e37bba706dace49bbe\" returns successfully"
May 16 00:25:52.621088 containerd[1500]: time="2025-05-16T00:25:52.621045747Z" level=info msg="StartContainer for \"6d606ef896d5ec3efa805b0fa2099f6217073e077496fc55c18b2b4974d1e5f3\" returns successfully"
May 16 00:25:53.094128 kubelet[2329]: E0516 00:25:53.093968 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:53.098394 kubelet[2329]: E0516 00:25:53.097907 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:53.099297 kubelet[2329]: E0516 00:25:53.099267 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:53.529399 kubelet[2329]: E0516 00:25:53.529358 2329 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 16 00:25:53.643834 kubelet[2329]: I0516 00:25:53.643781 2329 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 16 00:25:53.643834 kubelet[2329]: E0516 00:25:53.643821 2329 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 16 00:25:54.038241 kubelet[2329]: I0516 00:25:54.038178 2329 apiserver.go:52] "Watching apiserver"
May 16 00:25:54.053091 kubelet[2329]: I0516 00:25:54.053058 2329 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 16 00:25:54.105500 kubelet[2329]: E0516 00:25:54.105450 2329 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 16 00:25:54.105999 kubelet[2329]: E0516 00:25:54.105649 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:55.250773 kubelet[2329]: E0516 00:25:55.250731 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:56.102506 kubelet[2329]: E0516 00:25:56.102477 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:57.102943 systemd[1]: Reload requested from client PID 2600 ('systemctl') (unit session-7.scope)...
May 16 00:25:57.102961 systemd[1]: Reloading...
May 16 00:25:57.194419 zram_generator::config[2647]: No configuration found.
May 16 00:25:57.311911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:25:57.432463 systemd[1]: Reloading finished in 328 ms.
May 16 00:25:57.459474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:25:57.477719 systemd[1]: kubelet.service: Deactivated successfully.
May 16 00:25:57.478041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:25:57.478108 systemd[1]: kubelet.service: Consumed 982ms CPU time, 132.4M memory peak.
May 16 00:25:57.480111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:25:57.835168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:25:57.840381 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 00:25:57.883979 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:25:57.883979 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 00:25:57.883979 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:25:57.884455 kubelet[2689]: I0516 00:25:57.884074 2689 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 00:25:57.892483 kubelet[2689]: I0516 00:25:57.892439 2689 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 00:25:57.892483 kubelet[2689]: I0516 00:25:57.892474 2689 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 00:25:57.892857 kubelet[2689]: I0516 00:25:57.892826 2689 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 00:25:57.894765 kubelet[2689]: I0516 00:25:57.894732 2689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 16 00:25:57.898106 kubelet[2689]: I0516 00:25:57.898062 2689 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 00:25:57.904651 kubelet[2689]: I0516 00:25:57.904622 2689 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 00:25:57.909084 kubelet[2689]: I0516 00:25:57.909052 2689 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 00:25:57.909210 kubelet[2689]: I0516 00:25:57.909185 2689 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 00:25:57.909339 kubelet[2689]: I0516 00:25:57.909298 2689 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 00:25:57.909521 kubelet[2689]: I0516 00:25:57.909328 2689 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 00:25:57.909521 kubelet[2689]: I0516 00:25:57.909521 2689 topology_manager.go:138] "Creating topology manager with none policy"
May 16 00:25:57.909631 kubelet[2689]: I0516 00:25:57.909530 2689 container_manager_linux.go:300] "Creating device plugin manager"
May 16 00:25:57.909631 kubelet[2689]: I0516 00:25:57.909556 2689 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:25:57.909675 kubelet[2689]: I0516 00:25:57.909657 2689 kubelet.go:408] "Attempting to sync node with API server"
May 16 00:25:57.909675 kubelet[2689]: I0516 00:25:57.909669 2689 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 00:25:57.909717 kubelet[2689]: I0516 00:25:57.909696 2689 kubelet.go:314] "Adding apiserver pod source"
May 16 00:25:57.909717 kubelet[2689]: I0516 00:25:57.909712 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 00:25:57.910525 kubelet[2689]: I0516 00:25:57.910440 2689 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 16 00:25:57.911380 kubelet[2689]: I0516 00:25:57.910989 2689 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 00:25:57.911566 kubelet[2689]: I0516 00:25:57.911540 2689 server.go:1274] "Started kubelet"
May 16 00:25:57.912871 kubelet[2689]: I0516 00:25:57.912617 2689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 00:25:57.913005 kubelet[2689]: I0516 00:25:57.912985 2689 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 00:25:57.913056 kubelet[2689]: I0516 00:25:57.913036 2689 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 00:25:57.913290 kubelet[2689]: I0516 00:25:57.913263 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 00:25:57.914379 kubelet[2689]: I0516 00:25:57.914316 2689 server.go:449] "Adding debug handlers to kubelet server"
May 16 00:25:57.917389 kubelet[2689]: I0516 00:25:57.916648 2689 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 00:25:57.918432 kubelet[2689]: I0516 00:25:57.917928 2689 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 00:25:57.918432 kubelet[2689]: I0516 00:25:57.918148 2689 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 00:25:57.918529 kubelet[2689]: I0516 00:25:57.918512 2689 reconciler.go:26] "Reconciler: start to sync state"
May 16 00:25:57.930967 kubelet[2689]: I0516 00:25:57.930927 2689 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 00:25:57.932021 kubelet[2689]: E0516 00:25:57.917971 2689 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:25:57.932087 kubelet[2689]: E0516 00:25:57.931544 2689 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 00:25:57.933063 kubelet[2689]: I0516 00:25:57.933049 2689 factory.go:221] Registration of the containerd container factory successfully
May 16 00:25:57.933157 kubelet[2689]: I0516 00:25:57.933148 2689 factory.go:221] Registration of the systemd container factory successfully
May 16 00:25:57.940688 kubelet[2689]: I0516 00:25:57.940658 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 00:25:57.946259 kubelet[2689]: I0516 00:25:57.946220 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 00:25:57.946599 kubelet[2689]: I0516 00:25:57.946577 2689 status_manager.go:217] "Starting to sync pod status with apiserver"
May 16 00:25:57.946633 kubelet[2689]: I0516 00:25:57.946611 2689 kubelet.go:2321] "Starting kubelet main sync loop"
May 16 00:25:57.946756 kubelet[2689]: E0516 00:25:57.946721 2689 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 00:25:57.980152 kubelet[2689]: I0516 00:25:57.979840 2689 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 16 00:25:57.980152 kubelet[2689]: I0516 00:25:57.979862 2689 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 16 00:25:57.980152 kubelet[2689]: I0516 00:25:57.979882 2689 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:25:57.980152 kubelet[2689]: I0516 00:25:57.980035 2689 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 00:25:57.980152 kubelet[2689]: I0516 00:25:57.980047 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 00:25:57.980152 kubelet[2689]: I0516 00:25:57.980070 2689 policy_none.go:49] "None policy: Start"
May 16 00:25:57.980675 kubelet[2689]: I0516 00:25:57.980636 2689 memory_manager.go:170] "Starting memorymanager" policy="None"
May 16 00:25:57.980738 kubelet[2689]: I0516 00:25:57.980729 2689 state_mem.go:35] "Initializing new in-memory state store"
May 16 00:25:57.980923 kubelet[2689]: I0516 00:25:57.980913 2689 state_mem.go:75] "Updated machine memory state"
May 16 00:25:57.984850 kubelet[2689]: I0516 00:25:57.984835 2689 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 00:25:57.985200 kubelet[2689]: I0516 00:25:57.985188 2689 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 00:25:57.985270 kubelet[2689]: I0516 00:25:57.985246 2689 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 00:25:57.985482 kubelet[2689]: I0516 00:25:57.985469 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 00:25:58.053284 kubelet[2689]: E0516 00:25:58.053241 2689 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 16 00:25:58.091262 kubelet[2689]: I0516 00:25:58.091140 2689 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 00:25:58.099940 kubelet[2689]: I0516 00:25:58.099905 2689 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 16 00:25:58.100082 kubelet[2689]: I0516 00:25:58.099977 2689 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 16 00:25:58.104043 sudo[2724]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 16 00:25:58.104503 sudo[2724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 16 00:25:58.219417 kubelet[2689]: I0516 00:25:58.219365 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07bead2622bc50ce281897edc217bd5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07bead2622bc50ce281897edc217bd5d\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:25:58.219417 kubelet[2689]: I0516 00:25:58.219411 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07bead2622bc50ce281897edc217bd5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07bead2622bc50ce281897edc217bd5d\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:25:58.219417 kubelet[2689]: I0516 00:25:58.219428 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:25:58.219417 kubelet[2689]: I0516 00:25:58.219441 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:25:58.219690 kubelet[2689]: I0516 00:25:58.219458 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 16 00:25:58.219690 kubelet[2689]: I0516 00:25:58.219489 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07bead2622bc50ce281897edc217bd5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07bead2622bc50ce281897edc217bd5d\") " pod="kube-system/kube-apiserver-localhost"
May 16 00:25:58.219690 kubelet[2689]: I0516 00:25:58.219510 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:25:58.219690 kubelet[2689]: I0516 00:25:58.219525 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:25:58.219690 kubelet[2689]: I0516 00:25:58.219540 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 00:25:58.352576 kubelet[2689]: E0516 00:25:58.352550 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:58.354058 kubelet[2689]: E0516 00:25:58.353998 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:58.354058 kubelet[2689]: E0516 00:25:58.353998 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:58.575476 sudo[2724]: pam_unix(sudo:session): session closed for user root
May 16 00:25:58.910337 kubelet[2689]: I0516 00:25:58.910295 2689 apiserver.go:52] "Watching apiserver"
May 16 00:25:58.918291 kubelet[2689]: I0516 00:25:58.918267 2689 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 16 00:25:58.976501 kubelet[2689]: E0516 00:25:58.976459 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:58.977204 kubelet[2689]: E0516 00:25:58.977169 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:58.977416 kubelet[2689]: E0516 00:25:58.977399 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:59.249738 kubelet[2689]: I0516 00:25:59.249594 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.249574165 podStartE2EDuration="1.249574165s" podCreationTimestamp="2025-05-16 00:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:25:59.21475604 +0000 UTC m=+1.370206897" watchObservedRunningTime="2025-05-16 00:25:59.249574165 +0000 UTC m=+1.405025022"
May 16 00:25:59.258572 kubelet[2689]: I0516 00:25:59.258504 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.25848683 podStartE2EDuration="4.25848683s" podCreationTimestamp="2025-05-16 00:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:25:59.249769855 +0000 UTC m=+1.405220712" watchObservedRunningTime="2025-05-16 00:25:59.25848683 +0000 UTC m=+1.413937687"
May 16 00:25:59.267023 kubelet[2689]: I0516 00:25:59.266978 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.266960834 podStartE2EDuration="1.266960834s" podCreationTimestamp="2025-05-16 00:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:25:59.25845518 +0000 UTC m=+1.413906037" watchObservedRunningTime="2025-05-16 00:25:59.266960834 +0000 UTC m=+1.422411692"
May 16 00:25:59.978326 kubelet[2689]: E0516 00:25:59.978282 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:25:59.978759 kubelet[2689]: E0516 00:25:59.978541 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:00.131608 sudo[1706]: pam_unix(sudo:session): session closed for user root
May 16 00:26:00.133022 sshd[1705]: Connection closed by 10.0.0.1 port 52084
May 16 00:26:00.133466 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
May 16 00:26:00.137636 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:52084.service: Deactivated successfully.
May 16 00:26:00.139860 systemd[1]: session-7.scope: Deactivated successfully.
May 16 00:26:00.140097 systemd[1]: session-7.scope: Consumed 5.500s CPU time, 255.9M memory peak.
May 16 00:26:00.141732 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit.
May 16 00:26:00.142782 systemd-logind[1488]: Removed session 7.
May 16 00:26:00.594875 kubelet[2689]: E0516 00:26:00.594824 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:02.199567 kubelet[2689]: I0516 00:26:02.199520 2689 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:26:02.200066 kubelet[2689]: I0516 00:26:02.199982 2689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:26:02.200105 containerd[1500]: time="2025-05-16T00:26:02.199760303Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:26:03.121310 systemd[1]: Created slice kubepods-besteffort-pod59082cf4_4360_4ae8_93a1_fa7af9553b40.slice - libcontainer container kubepods-besteffort-pod59082cf4_4360_4ae8_93a1_fa7af9553b40.slice. May 16 00:26:03.147576 kubelet[2689]: I0516 00:26:03.147539 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-lib-modules\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.147755 kubelet[2689]: I0516 00:26:03.147584 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59082cf4-4360-4ae8-93a1-fa7af9553b40-kube-proxy\") pod \"kube-proxy-5bczz\" (UID: \"59082cf4-4360-4ae8-93a1-fa7af9553b40\") " pod="kube-system/kube-proxy-5bczz" May 16 00:26:03.147755 kubelet[2689]: I0516 00:26:03.147612 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-cgroup\") pod \"cilium-gsffl\" (UID: 
\"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.147864 kubelet[2689]: I0516 00:26:03.147841 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cni-path\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.147904 kubelet[2689]: I0516 00:26:03.147877 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-net\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.147946 kubelet[2689]: I0516 00:26:03.147900 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkfg\" (UniqueName: \"kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-kube-api-access-wlkfg\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.147946 kubelet[2689]: I0516 00:26:03.147924 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-run\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148020 kubelet[2689]: I0516 00:26:03.147947 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59082cf4-4360-4ae8-93a1-fa7af9553b40-lib-modules\") pod \"kube-proxy-5bczz\" (UID: \"59082cf4-4360-4ae8-93a1-fa7af9553b40\") " pod="kube-system/kube-proxy-5bczz" May 16 00:26:03.148020 kubelet[2689]: I0516 
00:26:03.147966 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-kernel\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148020 kubelet[2689]: I0516 00:26:03.147989 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-hubble-tls\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148152 kubelet[2689]: I0516 00:26:03.148025 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-hostproc\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148152 kubelet[2689]: I0516 00:26:03.148055 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee92a339-c113-4289-aa60-1c4951386171-clustermesh-secrets\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148152 kubelet[2689]: I0516 00:26:03.148074 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-etc-cni-netd\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148152 kubelet[2689]: I0516 00:26:03.148098 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee92a339-c113-4289-aa60-1c4951386171-cilium-config-path\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148152 kubelet[2689]: I0516 00:26:03.148125 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-bpf-maps\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148152 kubelet[2689]: I0516 00:26:03.148149 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-xtables-lock\") pod \"cilium-gsffl\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") " pod="kube-system/cilium-gsffl" May 16 00:26:03.148333 kubelet[2689]: I0516 00:26:03.148172 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59082cf4-4360-4ae8-93a1-fa7af9553b40-xtables-lock\") pod \"kube-proxy-5bczz\" (UID: \"59082cf4-4360-4ae8-93a1-fa7af9553b40\") " pod="kube-system/kube-proxy-5bczz" May 16 00:26:03.148333 kubelet[2689]: I0516 00:26:03.148197 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhc4\" (UniqueName: \"kubernetes.io/projected/59082cf4-4360-4ae8-93a1-fa7af9553b40-kube-api-access-rdhc4\") pod \"kube-proxy-5bczz\" (UID: \"59082cf4-4360-4ae8-93a1-fa7af9553b40\") " pod="kube-system/kube-proxy-5bczz" May 16 00:26:03.153967 systemd[1]: Created slice kubepods-burstable-podee92a339_c113_4289_aa60_1c4951386171.slice - libcontainer container kubepods-burstable-podee92a339_c113_4289_aa60_1c4951386171.slice. 
May 16 00:26:03.185885 systemd[1]: Created slice kubepods-besteffort-pod0e1fb7ae_1a27_4b10_81ff_3d3b6289e6b6.slice - libcontainer container kubepods-besteffort-pod0e1fb7ae_1a27_4b10_81ff_3d3b6289e6b6.slice. May 16 00:26:03.249138 kubelet[2689]: I0516 00:26:03.249071 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-cilium-config-path\") pod \"cilium-operator-5d85765b45-hmtng\" (UID: \"0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6\") " pod="kube-system/cilium-operator-5d85765b45-hmtng" May 16 00:26:03.251813 kubelet[2689]: I0516 00:26:03.250569 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnzl\" (UniqueName: \"kubernetes.io/projected/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-kube-api-access-mjnzl\") pod \"cilium-operator-5d85765b45-hmtng\" (UID: \"0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6\") " pod="kube-system/cilium-operator-5d85765b45-hmtng" May 16 00:26:03.436130 kubelet[2689]: E0516 00:26:03.435957 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:03.436734 containerd[1500]: time="2025-05-16T00:26:03.436689845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5bczz,Uid:59082cf4-4360-4ae8-93a1-fa7af9553b40,Namespace:kube-system,Attempt:0,}" May 16 00:26:03.464455 kubelet[2689]: E0516 00:26:03.464412 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:03.464959 containerd[1500]: time="2025-05-16T00:26:03.464916786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsffl,Uid:ee92a339-c113-4289-aa60-1c4951386171,Namespace:kube-system,Attempt:0,}" May 16 
00:26:03.488945 kubelet[2689]: E0516 00:26:03.488924 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:03.489364 containerd[1500]: time="2025-05-16T00:26:03.489289052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hmtng,Uid:0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6,Namespace:kube-system,Attempt:0,}" May 16 00:26:03.547426 containerd[1500]: time="2025-05-16T00:26:03.547383091Z" level=info msg="connecting to shim 39d81b2393a5c8c6469eeedff2ca82cc103512b269fdd0d357dd2274d4fc9b92" address="unix:///run/containerd/s/791edef19ea0f753ce32fba1e11a1817e09787679fdadb2303e8908115167c9f" namespace=k8s.io protocol=ttrpc version=3 May 16 00:26:03.562478 containerd[1500]: time="2025-05-16T00:26:03.557448320Z" level=info msg="connecting to shim 73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57" address="unix:///run/containerd/s/a365c028236490240c2e20f7140e5e937d968462781cbbd94fac4a4ed4770cb3" namespace=k8s.io protocol=ttrpc version=3 May 16 00:26:03.562683 containerd[1500]: time="2025-05-16T00:26:03.562611195Z" level=info msg="connecting to shim 7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47" address="unix:///run/containerd/s/3924b0e189f0f98efff2674049c277941d76a2cc2ac37b7bdd0bf992864a2703" namespace=k8s.io protocol=ttrpc version=3 May 16 00:26:03.591484 systemd[1]: Started cri-containerd-39d81b2393a5c8c6469eeedff2ca82cc103512b269fdd0d357dd2274d4fc9b92.scope - libcontainer container 39d81b2393a5c8c6469eeedff2ca82cc103512b269fdd0d357dd2274d4fc9b92. May 16 00:26:03.595906 systemd[1]: Started cri-containerd-73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57.scope - libcontainer container 73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57. 
May 16 00:26:03.597782 systemd[1]: Started cri-containerd-7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47.scope - libcontainer container 7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47. May 16 00:26:03.646885 containerd[1500]: time="2025-05-16T00:26:03.646836681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsffl,Uid:ee92a339-c113-4289-aa60-1c4951386171,Namespace:kube-system,Attempt:0,} returns sandbox id \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\"" May 16 00:26:03.647814 kubelet[2689]: E0516 00:26:03.647784 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:03.648787 containerd[1500]: time="2025-05-16T00:26:03.648751976Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:26:03.651838 containerd[1500]: time="2025-05-16T00:26:03.651611625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5bczz,Uid:59082cf4-4360-4ae8-93a1-fa7af9553b40,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d81b2393a5c8c6469eeedff2ca82cc103512b269fdd0d357dd2274d4fc9b92\"" May 16 00:26:03.655154 kubelet[2689]: E0516 00:26:03.655119 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:03.656104 containerd[1500]: time="2025-05-16T00:26:03.655901623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hmtng,Uid:0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\"" May 16 00:26:03.656937 containerd[1500]: time="2025-05-16T00:26:03.656891099Z" level=info msg="CreateContainer within 
sandbox \"39d81b2393a5c8c6469eeedff2ca82cc103512b269fdd0d357dd2274d4fc9b92\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:26:03.658289 kubelet[2689]: E0516 00:26:03.658234 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:03.668818 containerd[1500]: time="2025-05-16T00:26:03.668768049Z" level=info msg="Container 2b83a2e80410f3260fd0457b0deeffbdfbe97550525ba74a95de16b3491f3b4a: CDI devices from CRI Config.CDIDevices: []" May 16 00:26:03.677285 containerd[1500]: time="2025-05-16T00:26:03.677220624Z" level=info msg="CreateContainer within sandbox \"39d81b2393a5c8c6469eeedff2ca82cc103512b269fdd0d357dd2274d4fc9b92\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b83a2e80410f3260fd0457b0deeffbdfbe97550525ba74a95de16b3491f3b4a\"" May 16 00:26:03.677946 containerd[1500]: time="2025-05-16T00:26:03.677901970Z" level=info msg="StartContainer for \"2b83a2e80410f3260fd0457b0deeffbdfbe97550525ba74a95de16b3491f3b4a\"" May 16 00:26:03.681377 containerd[1500]: time="2025-05-16T00:26:03.679557476Z" level=info msg="connecting to shim 2b83a2e80410f3260fd0457b0deeffbdfbe97550525ba74a95de16b3491f3b4a" address="unix:///run/containerd/s/791edef19ea0f753ce32fba1e11a1817e09787679fdadb2303e8908115167c9f" protocol=ttrpc version=3 May 16 00:26:03.704573 systemd[1]: Started cri-containerd-2b83a2e80410f3260fd0457b0deeffbdfbe97550525ba74a95de16b3491f3b4a.scope - libcontainer container 2b83a2e80410f3260fd0457b0deeffbdfbe97550525ba74a95de16b3491f3b4a. 
May 16 00:26:03.747521 containerd[1500]: time="2025-05-16T00:26:03.747479004Z" level=info msg="StartContainer for \"2b83a2e80410f3260fd0457b0deeffbdfbe97550525ba74a95de16b3491f3b4a\" returns successfully" May 16 00:26:03.990316 kubelet[2689]: E0516 00:26:03.990099 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:08.105256 kubelet[2689]: E0516 00:26:08.105217 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:08.221514 kubelet[2689]: I0516 00:26:08.221414 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5bczz" podStartSLOduration=5.221394384 podStartE2EDuration="5.221394384s" podCreationTimestamp="2025-05-16 00:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:26:04.000060912 +0000 UTC m=+6.155511779" watchObservedRunningTime="2025-05-16 00:26:08.221394384 +0000 UTC m=+10.376845241" May 16 00:26:08.999827 kubelet[2689]: E0516 00:26:08.999786 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:09.209053 kubelet[2689]: E0516 00:26:09.208778 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:10.001106 kubelet[2689]: E0516 00:26:10.001076 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:10.604752 kubelet[2689]: E0516 
00:26:10.604578 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:12.114139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678195685.mount: Deactivated successfully. May 16 00:26:16.128511 containerd[1500]: time="2025-05-16T00:26:16.128452212Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:26:16.130958 containerd[1500]: time="2025-05-16T00:26:16.130912790Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 00:26:16.132644 containerd[1500]: time="2025-05-16T00:26:16.132616434Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:26:16.134241 containerd[1500]: time="2025-05-16T00:26:16.134188722Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.485397s" May 16 00:26:16.134241 containerd[1500]: time="2025-05-16T00:26:16.134218998Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 00:26:16.142439 containerd[1500]: time="2025-05-16T00:26:16.142391399Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:26:16.157419 containerd[1500]: time="2025-05-16T00:26:16.157369922Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:26:16.168054 containerd[1500]: time="2025-05-16T00:26:16.167984817Z" level=info msg="Container f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48: CDI devices from CRI Config.CDIDevices: []" May 16 00:26:16.171851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8342482.mount: Deactivated successfully. May 16 00:26:16.182723 containerd[1500]: time="2025-05-16T00:26:16.182646162Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\"" May 16 00:26:16.188369 containerd[1500]: time="2025-05-16T00:26:16.187106341Z" level=info msg="StartContainer for \"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\"" May 16 00:26:16.188369 containerd[1500]: time="2025-05-16T00:26:16.188094941Z" level=info msg="connecting to shim f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48" address="unix:///run/containerd/s/a365c028236490240c2e20f7140e5e937d968462781cbbd94fac4a4ed4770cb3" protocol=ttrpc version=3 May 16 00:26:16.210680 systemd[1]: Started cri-containerd-f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48.scope - libcontainer container f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48. 
May 16 00:26:16.246731 containerd[1500]: time="2025-05-16T00:26:16.246669030Z" level=info msg="StartContainer for \"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\" returns successfully" May 16 00:26:16.261563 systemd[1]: cri-containerd-f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48.scope: Deactivated successfully. May 16 00:26:16.264440 containerd[1500]: time="2025-05-16T00:26:16.264397335Z" level=info msg="received exit event container_id:\"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\" id:\"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\" pid:3107 exited_at:{seconds:1747355176 nanos:263968097}" May 16 00:26:16.264744 containerd[1500]: time="2025-05-16T00:26:16.264705044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\" id:\"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\" pid:3107 exited_at:{seconds:1747355176 nanos:263968097}" May 16 00:26:16.287799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48-rootfs.mount: Deactivated successfully. 
May 16 00:26:17.515553 kubelet[2689]: E0516 00:26:17.515517 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:18.517948 kubelet[2689]: E0516 00:26:18.517915 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:18.519307 containerd[1500]: time="2025-05-16T00:26:18.519272832Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:26:18.827228 containerd[1500]: time="2025-05-16T00:26:18.827096833Z" level=info msg="Container 369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d: CDI devices from CRI Config.CDIDevices: []" May 16 00:26:18.833667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833532206.mount: Deactivated successfully. 
May 16 00:26:19.691711 containerd[1500]: time="2025-05-16T00:26:19.691662975Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\"" May 16 00:26:19.692109 containerd[1500]: time="2025-05-16T00:26:19.692023573Z" level=info msg="StartContainer for \"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\"" May 16 00:26:19.692849 containerd[1500]: time="2025-05-16T00:26:19.692825781Z" level=info msg="connecting to shim 369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d" address="unix:///run/containerd/s/a365c028236490240c2e20f7140e5e937d968462781cbbd94fac4a4ed4770cb3" protocol=ttrpc version=3 May 16 00:26:19.714683 systemd[1]: Started cri-containerd-369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d.scope - libcontainer container 369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d. May 16 00:26:19.755994 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:26:19.756265 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:26:19.756476 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 00:26:19.758132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:26:19.760636 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:26:19.761146 systemd[1]: cri-containerd-369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d.scope: Deactivated successfully. 
May 16 00:26:19.761299 containerd[1500]: time="2025-05-16T00:26:19.761130921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\" id:\"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\" pid:3150 exited_at:{seconds:1747355179 nanos:760810970}" May 16 00:26:19.857033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:26:19.969911 containerd[1500]: time="2025-05-16T00:26:19.969788187Z" level=info msg="received exit event container_id:\"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\" id:\"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\" pid:3150 exited_at:{seconds:1747355179 nanos:760810970}" May 16 00:26:19.971038 containerd[1500]: time="2025-05-16T00:26:19.971004844Z" level=info msg="StartContainer for \"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\" returns successfully" May 16 00:26:19.989697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d-rootfs.mount: Deactivated successfully. May 16 00:26:20.564419 kubelet[2689]: E0516 00:26:20.564379 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:26:20.565976 containerd[1500]: time="2025-05-16T00:26:20.565936628Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:26:20.718562 containerd[1500]: time="2025-05-16T00:26:20.718498035Z" level=info msg="Container c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f: CDI devices from CRI Config.CDIDevices: []" May 16 00:26:20.723439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890900881.mount: Deactivated successfully. 
May 16 00:26:20.730384 containerd[1500]: time="2025-05-16T00:26:20.730325538Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\"" May 16 00:26:20.730984 containerd[1500]: time="2025-05-16T00:26:20.730847119Z" level=info msg="StartContainer for \"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\"" May 16 00:26:20.732259 containerd[1500]: time="2025-05-16T00:26:20.732236700Z" level=info msg="connecting to shim c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f" address="unix:///run/containerd/s/a365c028236490240c2e20f7140e5e937d968462781cbbd94fac4a4ed4770cb3" protocol=ttrpc version=3 May 16 00:26:20.764572 systemd[1]: Started cri-containerd-c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f.scope - libcontainer container c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f. May 16 00:26:20.807087 systemd[1]: cri-containerd-c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f.scope: Deactivated successfully. 
May 16 00:26:20.808560 containerd[1500]: time="2025-05-16T00:26:20.808520290Z" level=info msg="StartContainer for \"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\" returns successfully" May 16 00:26:20.808634 containerd[1500]: time="2025-05-16T00:26:20.808601282Z" level=info msg="received exit event container_id:\"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\" id:\"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\" pid:3197 exited_at:{seconds:1747355180 nanos:808422716}" May 16 00:26:20.808697 containerd[1500]: time="2025-05-16T00:26:20.808673838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\" id:\"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\" pid:3197 exited_at:{seconds:1747355180 nanos:808422716}" May 16 00:26:20.832631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f-rootfs.mount: Deactivated successfully. May 16 00:26:21.135683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349754286.mount: Deactivated successfully. 
May 16 00:26:21.573664 kubelet[2689]: E0516 00:26:21.573553 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:21.575020 containerd[1500]: time="2025-05-16T00:26:21.574960444Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 00:26:22.135372 containerd[1500]: time="2025-05-16T00:26:22.132975488Z" level=info msg="Container 8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e: CDI devices from CRI Config.CDIDevices: []"
May 16 00:26:22.135967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674865028.mount: Deactivated successfully.
May 16 00:26:22.153574 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:34556.service - OpenSSH per-connection server daemon (10.0.0.1:34556).
May 16 00:26:22.232016 sshd[3241]: Accepted publickey for core from 10.0.0.1 port 34556 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:22.233629 sshd-session[3241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:22.237969 systemd-logind[1488]: New session 8 of user core.
May 16 00:26:22.246497 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 00:26:22.388902 sshd[3243]: Connection closed by 10.0.0.1 port 34556
May 16 00:26:22.389105 sshd-session[3241]: pam_unix(sshd:session): session closed for user core
May 16 00:26:22.392657 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:34556.service: Deactivated successfully.
May 16 00:26:22.394544 systemd[1]: session-8.scope: Deactivated successfully.
May 16 00:26:22.395171 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit.
May 16 00:26:22.396033 systemd-logind[1488]: Removed session 8.
May 16 00:26:22.741052 containerd[1500]: time="2025-05-16T00:26:22.740930521Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:26:22.741481 containerd[1500]: time="2025-05-16T00:26:22.741444316Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\""
May 16 00:26:22.741985 containerd[1500]: time="2025-05-16T00:26:22.741952532Z" level=info msg="StartContainer for \"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\""
May 16 00:26:22.742795 containerd[1500]: time="2025-05-16T00:26:22.742735743Z" level=info msg="connecting to shim 8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e" address="unix:///run/containerd/s/a365c028236490240c2e20f7140e5e937d968462781cbbd94fac4a4ed4770cb3" protocol=ttrpc version=3
May 16 00:26:22.742940 containerd[1500]: time="2025-05-16T00:26:22.742743788Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 16 00:26:22.745642 containerd[1500]: time="2025-05-16T00:26:22.745601167Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:26:22.747166 containerd[1500]: time="2025-05-16T00:26:22.746762992Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.604315255s"
May 16 00:26:22.747166 containerd[1500]: time="2025-05-16T00:26:22.746803087Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 16 00:26:22.751232 containerd[1500]: time="2025-05-16T00:26:22.750655987Z" level=info msg="CreateContainer within sandbox \"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 16 00:26:22.765380 containerd[1500]: time="2025-05-16T00:26:22.765072068Z" level=info msg="Container 41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68: CDI devices from CRI Config.CDIDevices: []"
May 16 00:26:22.768602 systemd[1]: Started cri-containerd-8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e.scope - libcontainer container 8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e.
May 16 00:26:22.777146 containerd[1500]: time="2025-05-16T00:26:22.777095252Z" level=info msg="CreateContainer within sandbox \"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\""
May 16 00:26:22.777893 containerd[1500]: time="2025-05-16T00:26:22.777860499Z" level=info msg="StartContainer for \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\""
May 16 00:26:22.778716 containerd[1500]: time="2025-05-16T00:26:22.778691701Z" level=info msg="connecting to shim 41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68" address="unix:///run/containerd/s/3924b0e189f0f98efff2674049c277941d76a2cc2ac37b7bdd0bf992864a2703" protocol=ttrpc version=3
May 16 00:26:22.804590 systemd[1]: Started cri-containerd-41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68.scope - libcontainer container 41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68.
May 16 00:26:22.804897 systemd[1]: cri-containerd-8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e.scope: Deactivated successfully.
May 16 00:26:22.805210 containerd[1500]: time="2025-05-16T00:26:22.805168998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\" id:\"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\" pid:3288 exited_at:{seconds:1747355182 nanos:804579330}"
May 16 00:26:22.809026 containerd[1500]: time="2025-05-16T00:26:22.808982023Z" level=info msg="received exit event container_id:\"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\" id:\"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\" pid:3288 exited_at:{seconds:1747355182 nanos:804579330}"
May 16 00:26:22.811002 containerd[1500]: time="2025-05-16T00:26:22.810943680Z" level=info msg="StartContainer for \"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\" returns successfully"
May 16 00:26:22.848766 containerd[1500]: time="2025-05-16T00:26:22.847884419Z" level=info msg="StartContainer for \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" returns successfully"
May 16 00:26:23.134321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e-rootfs.mount: Deactivated successfully.
May 16 00:26:23.638142 kubelet[2689]: E0516 00:26:23.638107 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:23.641144 kubelet[2689]: E0516 00:26:23.640902 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:23.643017 containerd[1500]: time="2025-05-16T00:26:23.642962885Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:26:23.665726 containerd[1500]: time="2025-05-16T00:26:23.665673798Z" level=info msg="Container c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76: CDI devices from CRI Config.CDIDevices: []"
May 16 00:26:23.669450 kubelet[2689]: I0516 00:26:23.669256 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hmtng" podStartSLOduration=1.5787591 podStartE2EDuration="20.669238546s" podCreationTimestamp="2025-05-16 00:26:03 +0000 UTC" firstStartedPulling="2025-05-16 00:26:03.658764076 +0000 UTC m=+5.814214933" lastFinishedPulling="2025-05-16 00:26:22.749243522 +0000 UTC m=+24.904694379" observedRunningTime="2025-05-16 00:26:23.652893354 +0000 UTC m=+25.808344222" watchObservedRunningTime="2025-05-16 00:26:23.669238546 +0000 UTC m=+25.824689403"
May 16 00:26:23.670931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843710771.mount: Deactivated successfully.
May 16 00:26:23.746485 containerd[1500]: time="2025-05-16T00:26:23.746434777Z" level=info msg="CreateContainer within sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\""
May 16 00:26:23.747388 containerd[1500]: time="2025-05-16T00:26:23.746953792Z" level=info msg="StartContainer for \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\""
May 16 00:26:23.747863 containerd[1500]: time="2025-05-16T00:26:23.747831191Z" level=info msg="connecting to shim c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76" address="unix:///run/containerd/s/a365c028236490240c2e20f7140e5e937d968462781cbbd94fac4a4ed4770cb3" protocol=ttrpc version=3
May 16 00:26:23.769517 systemd[1]: Started cri-containerd-c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76.scope - libcontainer container c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76.
May 16 00:26:23.851468 containerd[1500]: time="2025-05-16T00:26:23.851299466Z" level=info msg="StartContainer for \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" returns successfully"
May 16 00:26:23.949719 containerd[1500]: time="2025-05-16T00:26:23.949565236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" id:\"4410ac6ec6fc2983a9f027642d83b1ae298cbb10659f13e0591158612a3a10b3\" pid:3394 exited_at:{seconds:1747355183 nanos:949234014}"
May 16 00:26:24.027954 kubelet[2689]: I0516 00:26:24.027913 2689 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 16 00:26:24.083137 systemd[1]: Created slice kubepods-burstable-podaee7ae47_ab28_409f_971b_f3ad450448c9.slice - libcontainer container kubepods-burstable-podaee7ae47_ab28_409f_971b_f3ad450448c9.slice.
May 16 00:26:24.095277 systemd[1]: Created slice kubepods-burstable-pod19e78b71_66ea_4cf5_8e75_3af3cce2b2ae.slice - libcontainer container kubepods-burstable-pod19e78b71_66ea_4cf5_8e75_3af3cce2b2ae.slice.
May 16 00:26:24.153831 kubelet[2689]: I0516 00:26:24.153779 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aee7ae47-ab28-409f-971b-f3ad450448c9-config-volume\") pod \"coredns-7c65d6cfc9-s6gzb\" (UID: \"aee7ae47-ab28-409f-971b-f3ad450448c9\") " pod="kube-system/coredns-7c65d6cfc9-s6gzb"
May 16 00:26:24.153831 kubelet[2689]: I0516 00:26:24.153820 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz4gm\" (UniqueName: \"kubernetes.io/projected/aee7ae47-ab28-409f-971b-f3ad450448c9-kube-api-access-kz4gm\") pod \"coredns-7c65d6cfc9-s6gzb\" (UID: \"aee7ae47-ab28-409f-971b-f3ad450448c9\") " pod="kube-system/coredns-7c65d6cfc9-s6gzb"
May 16 00:26:24.153831 kubelet[2689]: I0516 00:26:24.153844 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e78b71-66ea-4cf5-8e75-3af3cce2b2ae-config-volume\") pod \"coredns-7c65d6cfc9-c4c4q\" (UID: \"19e78b71-66ea-4cf5-8e75-3af3cce2b2ae\") " pod="kube-system/coredns-7c65d6cfc9-c4c4q"
May 16 00:26:24.154089 kubelet[2689]: I0516 00:26:24.153865 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzkq\" (UniqueName: \"kubernetes.io/projected/19e78b71-66ea-4cf5-8e75-3af3cce2b2ae-kube-api-access-qwzkq\") pod \"coredns-7c65d6cfc9-c4c4q\" (UID: \"19e78b71-66ea-4cf5-8e75-3af3cce2b2ae\") " pod="kube-system/coredns-7c65d6cfc9-c4c4q"
May 16 00:26:24.392089 kubelet[2689]: E0516 00:26:24.392047 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:24.392827 containerd[1500]: time="2025-05-16T00:26:24.392785402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s6gzb,Uid:aee7ae47-ab28-409f-971b-f3ad450448c9,Namespace:kube-system,Attempt:0,}"
May 16 00:26:24.398668 kubelet[2689]: E0516 00:26:24.398639 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:24.399008 containerd[1500]: time="2025-05-16T00:26:24.398972566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c4c4q,Uid:19e78b71-66ea-4cf5-8e75-3af3cce2b2ae,Namespace:kube-system,Attempt:0,}"
May 16 00:26:24.681083 kubelet[2689]: E0516 00:26:24.680987 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:24.681083 kubelet[2689]: E0516 00:26:24.681026 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:24.797956 kubelet[2689]: I0516 00:26:24.797884 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gsffl" podStartSLOduration=9.305731926 podStartE2EDuration="21.797865389s" podCreationTimestamp="2025-05-16 00:26:03 +0000 UTC" firstStartedPulling="2025-05-16 00:26:03.64838838 +0000 UTC m=+5.803839237" lastFinishedPulling="2025-05-16 00:26:16.140521843 +0000 UTC m=+18.295972700" observedRunningTime="2025-05-16 00:26:24.797411917 +0000 UTC m=+26.952862784" watchObservedRunningTime="2025-05-16 00:26:24.797865389 +0000 UTC m=+26.953316246"
May 16 00:26:25.682629 kubelet[2689]: E0516 00:26:25.682594 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:26.684439 kubelet[2689]: E0516 00:26:26.684389 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:26.921573 systemd-networkd[1417]: cilium_host: Link UP
May 16 00:26:26.921750 systemd-networkd[1417]: cilium_net: Link UP
May 16 00:26:26.921755 systemd-networkd[1417]: cilium_net: Gained carrier
May 16 00:26:26.921966 systemd-networkd[1417]: cilium_host: Gained carrier
May 16 00:26:27.027721 systemd-networkd[1417]: cilium_vxlan: Link UP
May 16 00:26:27.027733 systemd-networkd[1417]: cilium_vxlan: Gained carrier
May 16 00:26:27.095497 systemd-networkd[1417]: cilium_host: Gained IPv6LL
May 16 00:26:27.215440 systemd-networkd[1417]: cilium_net: Gained IPv6LL
May 16 00:26:27.250389 kernel: NET: Registered PF_ALG protocol family
May 16 00:26:27.410897 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:34568.service - OpenSSH per-connection server daemon (10.0.0.1:34568).
May 16 00:26:27.465982 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 34568 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:27.466651 sshd-session[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:27.471585 systemd-logind[1488]: New session 9 of user core.
May 16 00:26:27.474484 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 00:26:27.598495 sshd[3609]: Connection closed by 10.0.0.1 port 34568
May 16 00:26:27.600199 sshd-session[3596]: pam_unix(sshd:session): session closed for user core
May 16 00:26:27.603970 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:34568.service: Deactivated successfully.
May 16 00:26:27.606389 systemd[1]: session-9.scope: Deactivated successfully.
May 16 00:26:27.607197 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit.
May 16 00:26:27.608268 systemd-logind[1488]: Removed session 9.
May 16 00:26:27.950523 systemd-networkd[1417]: lxc_health: Link UP
May 16 00:26:27.962668 systemd-networkd[1417]: lxc_health: Gained carrier
May 16 00:26:28.443386 kernel: eth0: renamed from tmp2c9c2
May 16 00:26:28.454523 systemd-networkd[1417]: lxca11f1523a900: Link UP
May 16 00:26:28.455540 systemd-networkd[1417]: lxca11f1523a900: Gained carrier
May 16 00:26:28.483394 kernel: eth0: renamed from tmpc0777
May 16 00:26:28.495053 systemd-networkd[1417]: lxcef9909308de9: Link UP
May 16 00:26:28.495987 systemd-networkd[1417]: lxcef9909308de9: Gained carrier
May 16 00:26:28.815452 systemd-networkd[1417]: cilium_vxlan: Gained IPv6LL
May 16 00:26:29.265673 systemd-networkd[1417]: lxc_health: Gained IPv6LL
May 16 00:26:29.465976 kubelet[2689]: E0516 00:26:29.465936 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:29.689728 kubelet[2689]: E0516 00:26:29.689684 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:30.159530 systemd-networkd[1417]: lxca11f1523a900: Gained IPv6LL
May 16 00:26:30.351563 systemd-networkd[1417]: lxcef9909308de9: Gained IPv6LL
May 16 00:26:30.691383 kubelet[2689]: E0516 00:26:30.691323 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:32.083929 containerd[1500]: time="2025-05-16T00:26:32.083639706Z" level=info msg="connecting to shim c07770f9c8114566f47b8d77a973f119d613591fb2ee653c31ac7ba5081ee721" address="unix:///run/containerd/s/c52b560688e2f99c2b76520b5c751ead5e1ca10c1a9df77b54b2402de41a036c" namespace=k8s.io protocol=ttrpc version=3
May 16 00:26:32.085433 containerd[1500]: time="2025-05-16T00:26:32.085396164Z" level=info msg="connecting to shim 2c9c2ad3ae88d41e520bfa3fa677036bbf20adcd9050ed677d66ffe694fd5d6d" address="unix:///run/containerd/s/37eed9f0b7ce03d0b9cc0a557198baf760a7cf63a18fd85d3af1b5ea63f2380e" namespace=k8s.io protocol=ttrpc version=3
May 16 00:26:32.115611 systemd[1]: Started cri-containerd-c07770f9c8114566f47b8d77a973f119d613591fb2ee653c31ac7ba5081ee721.scope - libcontainer container c07770f9c8114566f47b8d77a973f119d613591fb2ee653c31ac7ba5081ee721.
May 16 00:26:32.118627 systemd[1]: Started cri-containerd-2c9c2ad3ae88d41e520bfa3fa677036bbf20adcd9050ed677d66ffe694fd5d6d.scope - libcontainer container 2c9c2ad3ae88d41e520bfa3fa677036bbf20adcd9050ed677d66ffe694fd5d6d.
May 16 00:26:32.133881 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 00:26:32.135340 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 00:26:32.169787 containerd[1500]: time="2025-05-16T00:26:32.169748279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s6gzb,Uid:aee7ae47-ab28-409f-971b-f3ad450448c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c07770f9c8114566f47b8d77a973f119d613591fb2ee653c31ac7ba5081ee721\""
May 16 00:26:32.170466 kubelet[2689]: E0516 00:26:32.170439 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:32.171592 containerd[1500]: time="2025-05-16T00:26:32.171564679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c4c4q,Uid:19e78b71-66ea-4cf5-8e75-3af3cce2b2ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c9c2ad3ae88d41e520bfa3fa677036bbf20adcd9050ed677d66ffe694fd5d6d\""
May 16 00:26:32.171978 containerd[1500]: time="2025-05-16T00:26:32.171950724Z" level=info msg="CreateContainer within sandbox \"c07770f9c8114566f47b8d77a973f119d613591fb2ee653c31ac7ba5081ee721\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 00:26:32.172634 kubelet[2689]: E0516 00:26:32.172595 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:32.174113 containerd[1500]: time="2025-05-16T00:26:32.174087416Z" level=info msg="CreateContainer within sandbox \"2c9c2ad3ae88d41e520bfa3fa677036bbf20adcd9050ed677d66ffe694fd5d6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 00:26:32.190644 containerd[1500]: time="2025-05-16T00:26:32.190592925Z" level=info msg="Container af051d00e10058edf27100f93b8f0f0a126dad87af0a712809f775d5e1de8762: CDI devices from CRI Config.CDIDevices: []"
May 16 00:26:32.194120 containerd[1500]: time="2025-05-16T00:26:32.194095102Z" level=info msg="Container 1fb9ec761d0db2fee1ef9363ab4eaac91789b6c7ebb86d4b45a986287ebee663: CDI devices from CRI Config.CDIDevices: []"
May 16 00:26:32.200805 containerd[1500]: time="2025-05-16T00:26:32.200783108Z" level=info msg="CreateContainer within sandbox \"2c9c2ad3ae88d41e520bfa3fa677036bbf20adcd9050ed677d66ffe694fd5d6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fb9ec761d0db2fee1ef9363ab4eaac91789b6c7ebb86d4b45a986287ebee663\""
May 16 00:26:32.201315 containerd[1500]: time="2025-05-16T00:26:32.201291333Z" level=info msg="StartContainer for \"1fb9ec761d0db2fee1ef9363ab4eaac91789b6c7ebb86d4b45a986287ebee663\""
May 16 00:26:32.202365 containerd[1500]: time="2025-05-16T00:26:32.202316628Z" level=info msg="connecting to shim 1fb9ec761d0db2fee1ef9363ab4eaac91789b6c7ebb86d4b45a986287ebee663" address="unix:///run/containerd/s/37eed9f0b7ce03d0b9cc0a557198baf760a7cf63a18fd85d3af1b5ea63f2380e" protocol=ttrpc version=3
May 16 00:26:32.215771 containerd[1500]: time="2025-05-16T00:26:32.215721916Z" level=info msg="CreateContainer within sandbox \"c07770f9c8114566f47b8d77a973f119d613591fb2ee653c31ac7ba5081ee721\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af051d00e10058edf27100f93b8f0f0a126dad87af0a712809f775d5e1de8762\""
May 16 00:26:32.216384 containerd[1500]: time="2025-05-16T00:26:32.216335048Z" level=info msg="StartContainer for \"af051d00e10058edf27100f93b8f0f0a126dad87af0a712809f775d5e1de8762\""
May 16 00:26:32.217059 containerd[1500]: time="2025-05-16T00:26:32.217038769Z" level=info msg="connecting to shim af051d00e10058edf27100f93b8f0f0a126dad87af0a712809f775d5e1de8762" address="unix:///run/containerd/s/c52b560688e2f99c2b76520b5c751ead5e1ca10c1a9df77b54b2402de41a036c" protocol=ttrpc version=3
May 16 00:26:32.224041 systemd[1]: Started cri-containerd-1fb9ec761d0db2fee1ef9363ab4eaac91789b6c7ebb86d4b45a986287ebee663.scope - libcontainer container 1fb9ec761d0db2fee1ef9363ab4eaac91789b6c7ebb86d4b45a986287ebee663.
May 16 00:26:32.241468 systemd[1]: Started cri-containerd-af051d00e10058edf27100f93b8f0f0a126dad87af0a712809f775d5e1de8762.scope - libcontainer container af051d00e10058edf27100f93b8f0f0a126dad87af0a712809f775d5e1de8762.
May 16 00:26:32.267524 containerd[1500]: time="2025-05-16T00:26:32.267490195Z" level=info msg="StartContainer for \"1fb9ec761d0db2fee1ef9363ab4eaac91789b6c7ebb86d4b45a986287ebee663\" returns successfully"
May 16 00:26:32.283394 containerd[1500]: time="2025-05-16T00:26:32.283336416Z" level=info msg="StartContainer for \"af051d00e10058edf27100f93b8f0f0a126dad87af0a712809f775d5e1de8762\" returns successfully"
May 16 00:26:32.618489 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:45020.service - OpenSSH per-connection server daemon (10.0.0.1:45020).
May 16 00:26:32.674396 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 45020 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:32.676586 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:32.681731 systemd-logind[1488]: New session 10 of user core.
May 16 00:26:32.693598 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 00:26:32.698080 kubelet[2689]: E0516 00:26:32.697656 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:32.700954 kubelet[2689]: E0516 00:26:32.700929 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:32.761973 kubelet[2689]: I0516 00:26:32.761898 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c4c4q" podStartSLOduration=29.761882458 podStartE2EDuration="29.761882458s" podCreationTimestamp="2025-05-16 00:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:26:32.761437603 +0000 UTC m=+34.916888460" watchObservedRunningTime="2025-05-16 00:26:32.761882458 +0000 UTC m=+34.917333315"
May 16 00:26:32.904901 sshd[4042]: Connection closed by 10.0.0.1 port 45020
May 16 00:26:32.905278 sshd-session[4040]: pam_unix(sshd:session): session closed for user core
May 16 00:26:32.909704 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:45020.service: Deactivated successfully.
May 16 00:26:32.911736 systemd[1]: session-10.scope: Deactivated successfully.
May 16 00:26:32.912570 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit.
May 16 00:26:32.913649 systemd-logind[1488]: Removed session 10.
May 16 00:26:33.054709 kubelet[2689]: I0516 00:26:33.053723 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-s6gzb" podStartSLOduration=30.053689192 podStartE2EDuration="30.053689192s" podCreationTimestamp="2025-05-16 00:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:26:33.053363522 +0000 UTC m=+35.208814379" watchObservedRunningTime="2025-05-16 00:26:33.053689192 +0000 UTC m=+35.209140049"
May 16 00:26:33.709845 kubelet[2689]: E0516 00:26:33.709802 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:33.710297 kubelet[2689]: E0516 00:26:33.709857 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:34.711869 kubelet[2689]: E0516 00:26:34.711833 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:34.712319 kubelet[2689]: E0516 00:26:34.712013 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:26:37.918385 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:45024.service - OpenSSH per-connection server daemon (10.0.0.1:45024).
May 16 00:26:37.961195 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 45024 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:37.963125 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:37.967836 systemd-logind[1488]: New session 11 of user core.
May 16 00:26:37.981495 systemd[1]: Started session-11.scope - Session 11 of User core.
May 16 00:26:38.093039 sshd[4070]: Connection closed by 10.0.0.1 port 45024
May 16 00:26:38.093450 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
May 16 00:26:38.098077 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:45024.service: Deactivated successfully.
May 16 00:26:38.100208 systemd[1]: session-11.scope: Deactivated successfully.
May 16 00:26:38.100908 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit.
May 16 00:26:38.101795 systemd-logind[1488]: Removed session 11.
May 16 00:26:43.112090 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:41518.service - OpenSSH per-connection server daemon (10.0.0.1:41518).
May 16 00:26:43.162293 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 41518 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:43.163756 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:43.167888 systemd-logind[1488]: New session 12 of user core.
May 16 00:26:43.180482 systemd[1]: Started session-12.scope - Session 12 of User core.
May 16 00:26:43.284266 sshd[4089]: Connection closed by 10.0.0.1 port 41518
May 16 00:26:43.284731 sshd-session[4087]: pam_unix(sshd:session): session closed for user core
May 16 00:26:43.301111 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:41518.service: Deactivated successfully.
May 16 00:26:43.302990 systemd[1]: session-12.scope: Deactivated successfully.
May 16 00:26:43.304602 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit.
May 16 00:26:43.306105 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:41532.service - OpenSSH per-connection server daemon (10.0.0.1:41532).
May 16 00:26:43.307428 systemd-logind[1488]: Removed session 12.
May 16 00:26:43.354972 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 41532 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:43.356403 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:43.360549 systemd-logind[1488]: New session 13 of user core.
May 16 00:26:43.369478 systemd[1]: Started session-13.scope - Session 13 of User core.
May 16 00:26:43.517252 sshd[4106]: Connection closed by 10.0.0.1 port 41532
May 16 00:26:43.517760 sshd-session[4103]: pam_unix(sshd:session): session closed for user core
May 16 00:26:43.529775 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:41532.service: Deactivated successfully.
May 16 00:26:43.532374 systemd[1]: session-13.scope: Deactivated successfully.
May 16 00:26:43.533719 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit.
May 16 00:26:43.538697 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:41540.service - OpenSSH per-connection server daemon (10.0.0.1:41540).
May 16 00:26:43.540369 systemd-logind[1488]: Removed session 13.
May 16 00:26:43.592678 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 41540 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:43.594214 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:43.598734 systemd-logind[1488]: New session 14 of user core.
May 16 00:26:43.608484 systemd[1]: Started session-14.scope - Session 14 of User core.
May 16 00:26:43.723664 sshd[4119]: Connection closed by 10.0.0.1 port 41540
May 16 00:26:43.723933 sshd-session[4116]: pam_unix(sshd:session): session closed for user core
May 16 00:26:43.728127 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:41540.service: Deactivated successfully.
May 16 00:26:43.730818 systemd[1]: session-14.scope: Deactivated successfully.
May 16 00:26:43.731515 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit.
May 16 00:26:43.732320 systemd-logind[1488]: Removed session 14.
May 16 00:26:48.735959 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:58328.service - OpenSSH per-connection server daemon (10.0.0.1:58328).
May 16 00:26:48.787528 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 58328 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:48.788962 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:48.793108 systemd-logind[1488]: New session 15 of user core.
May 16 00:26:48.806500 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 00:26:48.920229 sshd[4136]: Connection closed by 10.0.0.1 port 58328
May 16 00:26:48.920573 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
May 16 00:26:48.924855 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:58328.service: Deactivated successfully.
May 16 00:26:48.926908 systemd[1]: session-15.scope: Deactivated successfully.
May 16 00:26:48.927621 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit.
May 16 00:26:48.928553 systemd-logind[1488]: Removed session 15.
May 16 00:26:53.933332 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:58330.service - OpenSSH per-connection server daemon (10.0.0.1:58330).
May 16 00:26:53.983939 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 58330 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:53.985338 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:53.990051 systemd-logind[1488]: New session 16 of user core.
May 16 00:26:53.998504 systemd[1]: Started session-16.scope - Session 16 of User core.
May 16 00:26:54.108976 sshd[4152]: Connection closed by 10.0.0.1 port 58330
May 16 00:26:54.109418 sshd-session[4150]: pam_unix(sshd:session): session closed for user core
May 16 00:26:54.123449 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:58330.service: Deactivated successfully.
May 16 00:26:54.125593 systemd[1]: session-16.scope: Deactivated successfully.
May 16 00:26:54.127075 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit.
May 16 00:26:54.128467 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:58332.service - OpenSSH per-connection server daemon (10.0.0.1:58332).
May 16 00:26:54.129877 systemd-logind[1488]: Removed session 16.
May 16 00:26:54.181591 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 58332 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:54.183272 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:54.187868 systemd-logind[1488]: New session 17 of user core.
May 16 00:26:54.199479 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 00:26:54.411077 sshd[4167]: Connection closed by 10.0.0.1 port 58332
May 16 00:26:54.411478 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
May 16 00:26:54.422939 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:58332.service: Deactivated successfully.
May 16 00:26:54.424751 systemd[1]: session-17.scope: Deactivated successfully.
May 16 00:26:54.426046 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit.
May 16 00:26:54.427275 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:58348.service - OpenSSH per-connection server daemon (10.0.0.1:58348).
May 16 00:26:54.428198 systemd-logind[1488]: Removed session 17.
May 16 00:26:54.481049 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 58348 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:54.482466 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:54.486641 systemd-logind[1488]: New session 18 of user core.
May 16 00:26:54.495474 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 00:26:55.892196 sshd[4180]: Connection closed by 10.0.0.1 port 58348
May 16 00:26:55.892654 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
May 16 00:26:55.911569 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:58348.service: Deactivated successfully.
May 16 00:26:55.914183 systemd[1]: session-18.scope: Deactivated successfully.
May 16 00:26:55.916709 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit.
May 16 00:26:55.919014 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:58364.service - OpenSSH per-connection server daemon (10.0.0.1:58364).
May 16 00:26:55.920010 systemd-logind[1488]: Removed session 18.
May 16 00:26:55.969907 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 58364 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:55.971329 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:55.975511 systemd-logind[1488]: New session 19 of user core.
May 16 00:26:55.986470 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 00:26:56.202444 sshd[4202]: Connection closed by 10.0.0.1 port 58364
May 16 00:26:56.202659 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
May 16 00:26:56.213118 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:58364.service: Deactivated successfully.
May 16 00:26:56.215001 systemd[1]: session-19.scope: Deactivated successfully.
May 16 00:26:56.216656 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit.
May 16 00:26:56.218056 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:58378.service - OpenSSH per-connection server daemon (10.0.0.1:58378).
May 16 00:26:56.219039 systemd-logind[1488]: Removed session 19.
May 16 00:26:56.263466 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 58378 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:26:56.264888 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:26:56.269006 systemd-logind[1488]: New session 20 of user core.
May 16 00:26:56.281450 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 00:26:56.389292 sshd[4215]: Connection closed by 10.0.0.1 port 58378
May 16 00:26:56.389658 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
May 16 00:26:56.393410 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:58378.service: Deactivated successfully.
May 16 00:26:56.395565 systemd[1]: session-20.scope: Deactivated successfully.
May 16 00:26:56.396242 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit.
May 16 00:26:56.397016 systemd-logind[1488]: Removed session 20.
May 16 00:27:01.403901 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:54176.service - OpenSSH per-connection server daemon (10.0.0.1:54176).
May 16 00:27:01.455724 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 54176 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:27:01.457157 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:27:01.461289 systemd-logind[1488]: New session 21 of user core.
May 16 00:27:01.468465 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 00:27:01.572983 sshd[4232]: Connection closed by 10.0.0.1 port 54176
May 16 00:27:01.573310 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
May 16 00:27:01.576955 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:54176.service: Deactivated successfully.
May 16 00:27:01.579012 systemd[1]: session-21.scope: Deactivated successfully.
May 16 00:27:01.579717 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit.
May 16 00:27:01.580526 systemd-logind[1488]: Removed session 21.
May 16 00:27:06.590356 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:54178.service - OpenSSH per-connection server daemon (10.0.0.1:54178).
May 16 00:27:06.631167 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 54178 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:27:06.632946 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:27:06.637001 systemd-logind[1488]: New session 22 of user core.
May 16 00:27:06.646483 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 00:27:06.757393 sshd[4252]: Connection closed by 10.0.0.1 port 54178
May 16 00:27:06.757755 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
May 16 00:27:06.761486 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:54178.service: Deactivated successfully.
May 16 00:27:06.763584 systemd[1]: session-22.scope: Deactivated successfully.
May 16 00:27:06.764282 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit.
May 16 00:27:06.765112 systemd-logind[1488]: Removed session 22.
May 16 00:27:11.769940 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:46348.service - OpenSSH per-connection server daemon (10.0.0.1:46348).
May 16 00:27:11.821122 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 46348 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:27:11.822530 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:27:11.826203 systemd-logind[1488]: New session 23 of user core.
May 16 00:27:11.841463 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 00:27:11.947949 kubelet[2689]: E0516 00:27:11.947918 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:11.995318 sshd[4267]: Connection closed by 10.0.0.1 port 46348
May 16 00:27:11.995675 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
May 16 00:27:11.999098 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:46348.service: Deactivated successfully.
May 16 00:27:12.000885 systemd[1]: session-23.scope: Deactivated successfully.
May 16 00:27:12.001591 systemd-logind[1488]: Session 23 logged out. Waiting for processes to exit.
May 16 00:27:12.002327 systemd-logind[1488]: Removed session 23.
May 16 00:27:13.947933 kubelet[2689]: E0516 00:27:13.947883 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:17.008363 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:46354.service - OpenSSH per-connection server daemon (10.0.0.1:46354).
May 16 00:27:17.056588 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 46354 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:27:17.058172 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:27:17.062180 systemd-logind[1488]: New session 24 of user core.
May 16 00:27:17.070474 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 00:27:17.184985 sshd[4282]: Connection closed by 10.0.0.1 port 46354
May 16 00:27:17.185333 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
May 16 00:27:17.203197 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:46354.service: Deactivated successfully.
May 16 00:27:17.205070 systemd[1]: session-24.scope: Deactivated successfully.
May 16 00:27:17.206594 systemd-logind[1488]: Session 24 logged out. Waiting for processes to exit.
May 16 00:27:17.207890 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358).
May 16 00:27:17.209073 systemd-logind[1488]: Removed session 24.
May 16 00:27:17.263568 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs
May 16 00:27:17.265156 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:27:17.269838 systemd-logind[1488]: New session 25 of user core.
May 16 00:27:17.279483 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 00:27:18.605675 containerd[1500]: time="2025-05-16T00:27:18.605620626Z" level=info msg="StopContainer for \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" with timeout 30 (s)"
May 16 00:27:18.608958 containerd[1500]: time="2025-05-16T00:27:18.608925037Z" level=info msg="Stop container \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" with signal terminated"
May 16 00:27:18.621542 systemd[1]: cri-containerd-41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68.scope: Deactivated successfully.
May 16 00:27:18.623442 containerd[1500]: time="2025-05-16T00:27:18.623366254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" id:\"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" pid:3312 exited_at:{seconds:1747355238 nanos:622501802}"
May 16 00:27:18.623614 containerd[1500]: time="2025-05-16T00:27:18.623522131Z" level=info msg="received exit event container_id:\"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" id:\"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" pid:3312 exited_at:{seconds:1747355238 nanos:622501802}"
May 16 00:27:18.634441 containerd[1500]: time="2025-05-16T00:27:18.634406077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" id:\"9c615554b2a43074de5b5044bf7981b79ff86685dd4f2bc703a6b8b57662a176\" pid:4321 exited_at:{seconds:1747355238 nanos:633973876}"
May 16 00:27:18.636055 containerd[1500]: time="2025-05-16T00:27:18.636033401Z" level=info msg="StopContainer for \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" with timeout 2 (s)"
May 16 00:27:18.636358 containerd[1500]: time="2025-05-16T00:27:18.636325906Z" level=info msg="Stop container \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" with signal terminated"
May 16 00:27:18.644221 containerd[1500]: time="2025-05-16T00:27:18.643777679Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 00:27:18.645550 systemd-networkd[1417]: lxc_health: Link DOWN
May 16 00:27:18.645560 systemd-networkd[1417]: lxc_health: Lost carrier
May 16 00:27:18.650820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68-rootfs.mount: Deactivated successfully.
May 16 00:27:18.667828 systemd[1]: cri-containerd-c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76.scope: Deactivated successfully.
May 16 00:27:18.668422 containerd[1500]: time="2025-05-16T00:27:18.668371894Z" level=info msg="received exit event container_id:\"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" id:\"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" pid:3356 exited_at:{seconds:1747355238 nanos:668116910}"
May 16 00:27:18.668422 containerd[1500]: time="2025-05-16T00:27:18.668446145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" id:\"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" pid:3356 exited_at:{seconds:1747355238 nanos:668116910}"
May 16 00:27:18.668639 systemd[1]: cri-containerd-c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76.scope: Consumed 6.857s CPU time, 124.9M memory peak, 216K read from disk, 13.3M written to disk.
May 16 00:27:18.674617 containerd[1500]: time="2025-05-16T00:27:18.674577839Z" level=info msg="StopContainer for \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" returns successfully"
May 16 00:27:18.675189 containerd[1500]: time="2025-05-16T00:27:18.675149385Z" level=info msg="StopPodSandbox for \"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\""
May 16 00:27:18.675230 containerd[1500]: time="2025-05-16T00:27:18.675206675Z" level=info msg="Container to stop \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:27:18.683330 systemd[1]: cri-containerd-7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47.scope: Deactivated successfully.
May 16 00:27:18.691319 containerd[1500]: time="2025-05-16T00:27:18.691270987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\" id:\"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\" pid:2865 exit_status:137 exited_at:{seconds:1747355238 nanos:690960627}"
May 16 00:27:18.692242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76-rootfs.mount: Deactivated successfully.
May 16 00:27:18.701042 containerd[1500]: time="2025-05-16T00:27:18.700992193Z" level=info msg="StopContainer for \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" returns successfully"
May 16 00:27:18.701913 containerd[1500]: time="2025-05-16T00:27:18.701878768Z" level=info msg="StopPodSandbox for \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\""
May 16 00:27:18.702001 containerd[1500]: time="2025-05-16T00:27:18.701972156Z" level=info msg="Container to stop \"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:27:18.702001 containerd[1500]: time="2025-05-16T00:27:18.701993105Z" level=info msg="Container to stop \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:27:18.702063 containerd[1500]: time="2025-05-16T00:27:18.702031057Z" level=info msg="Container to stop \"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:27:18.702063 containerd[1500]: time="2025-05-16T00:27:18.702046387Z" level=info msg="Container to stop \"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:27:18.702063 containerd[1500]: time="2025-05-16T00:27:18.702055694Z" level=info msg="Container to stop \"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:27:18.709954 systemd[1]: cri-containerd-73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57.scope: Deactivated successfully.
May 16 00:27:18.726535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47-rootfs.mount: Deactivated successfully.
May 16 00:27:18.733794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57-rootfs.mount: Deactivated successfully.
May 16 00:27:18.743979 containerd[1500]: time="2025-05-16T00:27:18.743924261Z" level=info msg="shim disconnected" id=7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47 namespace=k8s.io
May 16 00:27:18.743979 containerd[1500]: time="2025-05-16T00:27:18.743971460Z" level=warning msg="cleaning up after shim disconnected" id=7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47 namespace=k8s.io
May 16 00:27:18.748715 containerd[1500]: time="2025-05-16T00:27:18.743984877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:27:18.749968 containerd[1500]: time="2025-05-16T00:27:18.749902473Z" level=info msg="shim disconnected" id=73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57 namespace=k8s.io
May 16 00:27:18.749968 containerd[1500]: time="2025-05-16T00:27:18.749931268Z" level=warning msg="cleaning up after shim disconnected" id=73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57 namespace=k8s.io
May 16 00:27:18.749968 containerd[1500]: time="2025-05-16T00:27:18.749941327Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:27:18.777313 containerd[1500]: time="2025-05-16T00:27:18.775238929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" id:\"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" pid:2866 exit_status:137 exited_at:{seconds:1747355238 nanos:710868114}"
May 16 00:27:18.777306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47-shm.mount: Deactivated successfully.
May 16 00:27:18.777441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57-shm.mount: Deactivated successfully.
May 16 00:27:18.789214 containerd[1500]: time="2025-05-16T00:27:18.789177130Z" level=info msg="TearDown network for sandbox \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" successfully"
May 16 00:27:18.789606 containerd[1500]: time="2025-05-16T00:27:18.789356400Z" level=info msg="StopPodSandbox for \"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" returns successfully"
May 16 00:27:18.790200 containerd[1500]: time="2025-05-16T00:27:18.790144709Z" level=info msg="TearDown network for sandbox \"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\" successfully"
May 16 00:27:18.790502 containerd[1500]: time="2025-05-16T00:27:18.790372322Z" level=info msg="StopPodSandbox for \"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\" returns successfully"
May 16 00:27:18.795141 containerd[1500]: time="2025-05-16T00:27:18.795106780Z" level=info msg="received exit event sandbox_id:\"73509e37e5f33e34afc07b751f35702b26abafc58d325a2e11711657fc44ad57\" exit_status:137 exited_at:{seconds:1747355238 nanos:710868114}"
May 16 00:27:18.798558 containerd[1500]: time="2025-05-16T00:27:18.796689748Z" level=info msg="received exit event sandbox_id:\"7d1eff70d650383fe39c96bb6916f4f40b3e66108bee557f7cbaa5d92f810b47\" exit_status:137 exited_at:{seconds:1747355238 nanos:690960627}"
May 16 00:27:18.857060 kubelet[2689]: I0516 00:27:18.856777 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlkfg\" (UniqueName: \"kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-kube-api-access-wlkfg\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857060 kubelet[2689]: I0516 00:27:18.856830 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-bpf-maps\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857060 kubelet[2689]: I0516 00:27:18.856856 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-cilium-config-path\") pod \"0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6\" (UID: \"0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6\") "
May 16 00:27:18.857060 kubelet[2689]: I0516 00:27:18.856875 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-cgroup\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857060 kubelet[2689]: I0516 00:27:18.856891 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-net\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857060 kubelet[2689]: I0516 00:27:18.856907 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-xtables-lock\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857828 kubelet[2689]: I0516 00:27:18.856920 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-lib-modules\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857828 kubelet[2689]: I0516 00:27:18.856936 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-hostproc\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857828 kubelet[2689]: I0516 00:27:18.856954 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee92a339-c113-4289-aa60-1c4951386171-clustermesh-secrets\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857828 kubelet[2689]: I0516 00:27:18.856971 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjnzl\" (UniqueName: \"kubernetes.io/projected/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-kube-api-access-mjnzl\") pod \"0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6\" (UID: \"0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6\") "
May 16 00:27:18.857828 kubelet[2689]: I0516 00:27:18.856989 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-kernel\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.857828 kubelet[2689]: I0516 00:27:18.857004 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-hubble-tls\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.858099 kubelet[2689]: I0516 00:27:18.857017 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-etc-cni-netd\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.858099 kubelet[2689]: I0516 00:27:18.857034 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee92a339-c113-4289-aa60-1c4951386171-cilium-config-path\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.858099 kubelet[2689]: I0516 00:27:18.857052 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cni-path\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.858099 kubelet[2689]: I0516 00:27:18.857067 2689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-run\") pod \"ee92a339-c113-4289-aa60-1c4951386171\" (UID: \"ee92a339-c113-4289-aa60-1c4951386171\") "
May 16 00:27:18.858099 kubelet[2689]: I0516 00:27:18.857116 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.858099 kubelet[2689]: I0516 00:27:18.857152 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.858572 kubelet[2689]: I0516 00:27:18.857168 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.858572 kubelet[2689]: I0516 00:27:18.857183 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.858572 kubelet[2689]: I0516 00:27:18.857199 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.858572 kubelet[2689]: I0516 00:27:18.857212 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-hostproc" (OuterVolumeSpecName: "hostproc") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.860727 kubelet[2689]: I0516 00:27:18.860700 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6" (UID: "0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 16 00:27:18.860770 kubelet[2689]: I0516 00:27:18.860741 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.861584 kubelet[2689]: I0516 00:27:18.861556 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee92a339-c113-4289-aa60-1c4951386171-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 16 00:27:18.861991 kubelet[2689]: I0516 00:27:18.861757 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.861991 kubelet[2689]: I0516 00:27:18.861766 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.861991 kubelet[2689]: I0516 00:27:18.861826 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cni-path" (OuterVolumeSpecName: "cni-path") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 16 00:27:18.861991 kubelet[2689]: I0516 00:27:18.861900 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-kube-api-access-wlkfg" (OuterVolumeSpecName: "kube-api-access-wlkfg") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "kube-api-access-wlkfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 16 00:27:18.863765 kubelet[2689]: I0516 00:27:18.863718 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 16 00:27:18.863765 kubelet[2689]: I0516 00:27:18.863623 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-kube-api-access-mjnzl" (OuterVolumeSpecName: "kube-api-access-mjnzl") pod "0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6" (UID: "0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6"). InnerVolumeSpecName "kube-api-access-mjnzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 16 00:27:18.865244 kubelet[2689]: I0516 00:27:18.865214 2689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee92a339-c113-4289-aa60-1c4951386171-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee92a339-c113-4289-aa60-1c4951386171" (UID: "ee92a339-c113-4289-aa60-1c4951386171"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 16 00:27:18.957520 kubelet[2689]: I0516 00:27:18.957471 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 16 00:27:18.957520 kubelet[2689]: I0516 00:27:18.957531 2689 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957545 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957558 2689 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-lib-modules\") on node \"localhost\" DevicePath \"\""
May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957566 2689 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957574 2689 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957582 2689 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-hostproc\") on node \"localhost\" DevicePath \"\""
May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957590 2689
reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee92a339-c113-4289-aa60-1c4951386171-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957606 2689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjnzl\" (UniqueName: \"kubernetes.io/projected/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6-kube-api-access-mjnzl\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957731 kubelet[2689]: I0516 00:27:18.957614 2689 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957919 kubelet[2689]: I0516 00:27:18.957622 2689 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957919 kubelet[2689]: I0516 00:27:18.957629 2689 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957919 kubelet[2689]: I0516 00:27:18.957637 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee92a339-c113-4289-aa60-1c4951386171-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957919 kubelet[2689]: I0516 00:27:18.957645 2689 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957919 kubelet[2689]: I0516 00:27:18.957652 2689 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/ee92a339-c113-4289-aa60-1c4951386171-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:27:18.957919 kubelet[2689]: I0516 00:27:18.957661 2689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlkfg\" (UniqueName: \"kubernetes.io/projected/ee92a339-c113-4289-aa60-1c4951386171-kube-api-access-wlkfg\") on node \"localhost\" DevicePath \"\"" May 16 00:27:19.650707 systemd[1]: var-lib-kubelet-pods-0e1fb7ae\x2d1a27\x2d4b10\x2d81ff\x2d3d3b6289e6b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmjnzl.mount: Deactivated successfully. May 16 00:27:19.650823 systemd[1]: var-lib-kubelet-pods-ee92a339\x2dc113\x2d4289\x2daa60\x2d1c4951386171-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwlkfg.mount: Deactivated successfully. May 16 00:27:19.650904 systemd[1]: var-lib-kubelet-pods-ee92a339\x2dc113\x2d4289\x2daa60\x2d1c4951386171-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:27:19.650983 systemd[1]: var-lib-kubelet-pods-ee92a339\x2dc113\x2d4289\x2daa60\x2d1c4951386171-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:27:19.804801 kubelet[2689]: I0516 00:27:19.804771 2689 scope.go:117] "RemoveContainer" containerID="41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68" May 16 00:27:19.808464 containerd[1500]: time="2025-05-16T00:27:19.808432683Z" level=info msg="RemoveContainer for \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\"" May 16 00:27:19.812382 systemd[1]: Removed slice kubepods-besteffort-pod0e1fb7ae_1a27_4b10_81ff_3d3b6289e6b6.slice - libcontainer container kubepods-besteffort-pod0e1fb7ae_1a27_4b10_81ff_3d3b6289e6b6.slice. May 16 00:27:19.815989 systemd[1]: Removed slice kubepods-burstable-podee92a339_c113_4289_aa60_1c4951386171.slice - libcontainer container kubepods-burstable-podee92a339_c113_4289_aa60_1c4951386171.slice. 
May 16 00:27:19.816197 systemd[1]: kubepods-burstable-podee92a339_c113_4289_aa60_1c4951386171.slice: Consumed 6.981s CPU time, 125.5M memory peak, 352K read from disk, 13.3M written to disk. May 16 00:27:19.825942 containerd[1500]: time="2025-05-16T00:27:19.825903470Z" level=info msg="RemoveContainer for \"41236831f6a838259885c7749525a16fb35f2c40de811101de58c282d3958b68\" returns successfully" May 16 00:27:19.826175 kubelet[2689]: I0516 00:27:19.826153 2689 scope.go:117] "RemoveContainer" containerID="c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76" May 16 00:27:19.828076 containerd[1500]: time="2025-05-16T00:27:19.828038037Z" level=info msg="RemoveContainer for \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\"" May 16 00:27:19.835977 containerd[1500]: time="2025-05-16T00:27:19.835939290Z" level=info msg="RemoveContainer for \"c6fb25d52247fc185d9f7829770682280e82392276cd82fb42e962dd5650db76\" returns successfully" May 16 00:27:19.836140 kubelet[2689]: I0516 00:27:19.836108 2689 scope.go:117] "RemoveContainer" containerID="8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e" May 16 00:27:19.837159 containerd[1500]: time="2025-05-16T00:27:19.837140301Z" level=info msg="RemoveContainer for \"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\"" May 16 00:27:19.841249 containerd[1500]: time="2025-05-16T00:27:19.841221476Z" level=info msg="RemoveContainer for \"8498ea1bde43c78010e449e2b72939c6680084a27c8ecac86ea2229d8a2ab24e\" returns successfully" May 16 00:27:19.841378 kubelet[2689]: I0516 00:27:19.841357 2689 scope.go:117] "RemoveContainer" containerID="c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f" May 16 00:27:19.843129 containerd[1500]: time="2025-05-16T00:27:19.843108602Z" level=info msg="RemoveContainer for \"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\"" May 16 00:27:19.847126 containerd[1500]: time="2025-05-16T00:27:19.847107170Z" level=info 
msg="RemoveContainer for \"c026bc5eb04b0020c02741a182eb53543ec9192470071e53ae869697fd1b834f\" returns successfully" May 16 00:27:19.847232 kubelet[2689]: I0516 00:27:19.847207 2689 scope.go:117] "RemoveContainer" containerID="369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d" May 16 00:27:19.848645 containerd[1500]: time="2025-05-16T00:27:19.848248838Z" level=info msg="RemoveContainer for \"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\"" May 16 00:27:19.851696 containerd[1500]: time="2025-05-16T00:27:19.851671492Z" level=info msg="RemoveContainer for \"369ca9ad0aa5ceb97156aaa9e501bdbad76df50bc24c9f695a253ac005d50a0d\" returns successfully" May 16 00:27:19.851813 kubelet[2689]: I0516 00:27:19.851791 2689 scope.go:117] "RemoveContainer" containerID="f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48" May 16 00:27:19.852728 containerd[1500]: time="2025-05-16T00:27:19.852704605Z" level=info msg="RemoveContainer for \"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\"" May 16 00:27:19.855898 containerd[1500]: time="2025-05-16T00:27:19.855872885Z" level=info msg="RemoveContainer for \"f46dd114cde3946cecab712cadc6f4809335fb15245aa98a8604b89b209e7e48\" returns successfully" May 16 00:27:19.951691 kubelet[2689]: I0516 00:27:19.951578 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6" path="/var/lib/kubelet/pods/0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6/volumes" May 16 00:27:19.952266 kubelet[2689]: I0516 00:27:19.952239 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee92a339-c113-4289-aa60-1c4951386171" path="/var/lib/kubelet/pods/ee92a339-c113-4289-aa60-1c4951386171/volumes" May 16 00:27:20.572317 sshd[4297]: Connection closed by 10.0.0.1 port 46358 May 16 00:27:20.572711 sshd-session[4294]: pam_unix(sshd:session): session closed for user core May 16 00:27:20.583021 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:46358.service: 
Deactivated successfully. May 16 00:27:20.584920 systemd[1]: session-25.scope: Deactivated successfully. May 16 00:27:20.586489 systemd-logind[1488]: Session 25 logged out. Waiting for processes to exit. May 16 00:27:20.587976 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:33490.service - OpenSSH per-connection server daemon (10.0.0.1:33490). May 16 00:27:20.588944 systemd-logind[1488]: Removed session 25. May 16 00:27:20.636263 sshd[4449]: Accepted publickey for core from 10.0.0.1 port 33490 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs May 16 00:27:20.637552 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:27:20.641461 systemd-logind[1488]: New session 26 of user core. May 16 00:27:20.651464 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 00:27:21.475596 sshd[4452]: Connection closed by 10.0.0.1 port 33490 May 16 00:27:21.477672 sshd-session[4449]: pam_unix(sshd:session): session closed for user core May 16 00:27:21.489195 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:33490.service: Deactivated successfully. May 16 00:27:21.493546 systemd[1]: session-26.scope: Deactivated successfully. May 16 00:27:21.496084 systemd-logind[1488]: Session 26 logged out. Waiting for processes to exit. May 16 00:27:21.498979 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:33504.service - OpenSSH per-connection server daemon (10.0.0.1:33504). May 16 00:27:21.500978 systemd-logind[1488]: Removed session 26. 
May 16 00:27:21.502794 kubelet[2689]: E0516 00:27:21.502147 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee92a339-c113-4289-aa60-1c4951386171" containerName="cilium-agent" May 16 00:27:21.502794 kubelet[2689]: E0516 00:27:21.502167 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee92a339-c113-4289-aa60-1c4951386171" containerName="mount-cgroup" May 16 00:27:21.502794 kubelet[2689]: E0516 00:27:21.502173 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee92a339-c113-4289-aa60-1c4951386171" containerName="mount-bpf-fs" May 16 00:27:21.502794 kubelet[2689]: E0516 00:27:21.502179 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee92a339-c113-4289-aa60-1c4951386171" containerName="clean-cilium-state" May 16 00:27:21.502794 kubelet[2689]: E0516 00:27:21.502186 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee92a339-c113-4289-aa60-1c4951386171" containerName="apply-sysctl-overwrites" May 16 00:27:21.502794 kubelet[2689]: E0516 00:27:21.502192 2689 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6" containerName="cilium-operator" May 16 00:27:21.502794 kubelet[2689]: I0516 00:27:21.502212 2689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee92a339-c113-4289-aa60-1c4951386171" containerName="cilium-agent" May 16 00:27:21.502794 kubelet[2689]: I0516 00:27:21.502219 2689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1fb7ae-1a27-4b10-81ff-3d3b6289e6b6" containerName="cilium-operator" May 16 00:27:21.518498 systemd[1]: Created slice kubepods-burstable-pod89386ead_3950_4592_bd45_515f0d98726b.slice - libcontainer container kubepods-burstable-pod89386ead_3950_4592_bd45_515f0d98726b.slice. 
May 16 00:27:21.555381 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 33504 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs May 16 00:27:21.556903 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:27:21.561398 systemd-logind[1488]: New session 27 of user core. May 16 00:27:21.568569 kubelet[2689]: I0516 00:27:21.568529 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-cni-path\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.568569 kubelet[2689]: I0516 00:27:21.568561 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-host-proc-sys-kernel\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.568795 kubelet[2689]: I0516 00:27:21.568586 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twrh\" (UniqueName: \"kubernetes.io/projected/89386ead-3950-4592-bd45-515f0d98726b-kube-api-access-8twrh\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.568795 kubelet[2689]: I0516 00:27:21.568603 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89386ead-3950-4592-bd45-515f0d98726b-clustermesh-secrets\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.568795 kubelet[2689]: I0516 00:27:21.568661 2689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-cilium-cgroup\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.568795 kubelet[2689]: I0516 00:27:21.568695 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89386ead-3950-4592-bd45-515f0d98726b-cilium-ipsec-secrets\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.568795 kubelet[2689]: I0516 00:27:21.568739 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-hostproc\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.568795 kubelet[2689]: I0516 00:27:21.568761 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-lib-modules\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569010 kubelet[2689]: I0516 00:27:21.568778 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-xtables-lock\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569010 kubelet[2689]: I0516 00:27:21.568804 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/89386ead-3950-4592-bd45-515f0d98726b-cilium-config-path\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569010 kubelet[2689]: I0516 00:27:21.568827 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-host-proc-sys-net\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569010 kubelet[2689]: I0516 00:27:21.568846 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-bpf-maps\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569010 kubelet[2689]: I0516 00:27:21.568863 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-etc-cni-netd\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569010 kubelet[2689]: I0516 00:27:21.568881 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89386ead-3950-4592-bd45-515f0d98726b-hubble-tls\") pod \"cilium-xtztv\" (UID: \"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569221 kubelet[2689]: I0516 00:27:21.568898 2689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89386ead-3950-4592-bd45-515f0d98726b-cilium-run\") pod \"cilium-xtztv\" (UID: 
\"89386ead-3950-4592-bd45-515f0d98726b\") " pod="kube-system/cilium-xtztv" May 16 00:27:21.569514 systemd[1]: Started session-27.scope - Session 27 of User core. May 16 00:27:21.620668 sshd[4466]: Connection closed by 10.0.0.1 port 33504 May 16 00:27:21.620961 sshd-session[4463]: pam_unix(sshd:session): session closed for user core May 16 00:27:21.639807 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:33504.service: Deactivated successfully. May 16 00:27:21.641940 systemd[1]: session-27.scope: Deactivated successfully. May 16 00:27:21.643597 systemd-logind[1488]: Session 27 logged out. Waiting for processes to exit. May 16 00:27:21.645042 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:33518.service - OpenSSH per-connection server daemon (10.0.0.1:33518). May 16 00:27:21.646457 systemd-logind[1488]: Removed session 27. May 16 00:27:21.702206 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 33518 ssh2: RSA SHA256:4OrBIk3c4YqkoKp27/ZIXpxEKeoT8r5gTHlZ2uMhobs May 16 00:27:21.703833 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:27:21.708205 systemd-logind[1488]: New session 28 of user core. May 16 00:27:21.715526 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 16 00:27:21.822447 kubelet[2689]: E0516 00:27:21.822296 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:27:21.824080 containerd[1500]: time="2025-05-16T00:27:21.824039266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xtztv,Uid:89386ead-3950-4592-bd45-515f0d98726b,Namespace:kube-system,Attempt:0,}" May 16 00:27:21.845317 containerd[1500]: time="2025-05-16T00:27:21.843978433Z" level=info msg="connecting to shim 44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b" address="unix:///run/containerd/s/a63357716efdedba5ac14f925655c7fe32d919a914f53e2d448fbfebc48de4ec" namespace=k8s.io protocol=ttrpc version=3 May 16 00:27:21.872538 systemd[1]: Started cri-containerd-44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b.scope - libcontainer container 44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b. 
May 16 00:27:21.896094 containerd[1500]: time="2025-05-16T00:27:21.896042165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xtztv,Uid:89386ead-3950-4592-bd45-515f0d98726b,Namespace:kube-system,Attempt:0,} returns sandbox id \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\"" May 16 00:27:21.896693 kubelet[2689]: E0516 00:27:21.896660 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:27:21.898790 containerd[1500]: time="2025-05-16T00:27:21.898763864Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:27:21.910006 containerd[1500]: time="2025-05-16T00:27:21.909971107Z" level=info msg="Container 4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c: CDI devices from CRI Config.CDIDevices: []" May 16 00:27:21.918452 containerd[1500]: time="2025-05-16T00:27:21.918416786Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c\"" May 16 00:27:21.919796 containerd[1500]: time="2025-05-16T00:27:21.918822657Z" level=info msg="StartContainer for \"4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c\"" May 16 00:27:21.919796 containerd[1500]: time="2025-05-16T00:27:21.919571398Z" level=info msg="connecting to shim 4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c" address="unix:///run/containerd/s/a63357716efdedba5ac14f925655c7fe32d919a914f53e2d448fbfebc48de4ec" protocol=ttrpc version=3 May 16 00:27:21.939512 systemd[1]: Started cri-containerd-4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c.scope - libcontainer 
container 4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c. May 16 00:27:21.970648 containerd[1500]: time="2025-05-16T00:27:21.970055421Z" level=info msg="StartContainer for \"4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c\" returns successfully" May 16 00:27:21.977377 systemd[1]: cri-containerd-4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c.scope: Deactivated successfully. May 16 00:27:21.978813 containerd[1500]: time="2025-05-16T00:27:21.978454261Z" level=info msg="received exit event container_id:\"4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c\" id:\"4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c\" pid:4548 exited_at:{seconds:1747355241 nanos:978178978}" May 16 00:27:21.978875 containerd[1500]: time="2025-05-16T00:27:21.978839112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c\" id:\"4b37122d38abf53aeb2265f8657961b1b18b2611ae76c5ecb5dbe1474195500c\" pid:4548 exited_at:{seconds:1747355241 nanos:978178978}" May 16 00:27:22.676185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353216672.mount: Deactivated successfully. 
May 16 00:27:22.821019 kubelet[2689]: E0516 00:27:22.820872 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:27:22.822434 containerd[1500]: time="2025-05-16T00:27:22.822384156Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:27:22.850417 containerd[1500]: time="2025-05-16T00:27:22.850324092Z" level=info msg="Container 1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46: CDI devices from CRI Config.CDIDevices: []" May 16 00:27:22.859482 containerd[1500]: time="2025-05-16T00:27:22.859420061Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46\"" May 16 00:27:22.860091 containerd[1500]: time="2025-05-16T00:27:22.860068132Z" level=info msg="StartContainer for \"1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46\"" May 16 00:27:22.861004 containerd[1500]: time="2025-05-16T00:27:22.860981756Z" level=info msg="connecting to shim 1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46" address="unix:///run/containerd/s/a63357716efdedba5ac14f925655c7fe32d919a914f53e2d448fbfebc48de4ec" protocol=ttrpc version=3 May 16 00:27:22.880501 systemd[1]: Started cri-containerd-1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46.scope - libcontainer container 1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46. 
May 16 00:27:22.913013 containerd[1500]: time="2025-05-16T00:27:22.912890379Z" level=info msg="StartContainer for \"1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46\" returns successfully" May 16 00:27:22.916425 systemd[1]: cri-containerd-1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46.scope: Deactivated successfully. May 16 00:27:22.917390 containerd[1500]: time="2025-05-16T00:27:22.917354083Z" level=info msg="received exit event container_id:\"1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46\" id:\"1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46\" pid:4591 exited_at:{seconds:1747355242 nanos:916979863}" May 16 00:27:22.917462 containerd[1500]: time="2025-05-16T00:27:22.917428475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46\" id:\"1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46\" pid:4591 exited_at:{seconds:1747355242 nanos:916979863}" May 16 00:27:22.938292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ce40fd5ce5fd7d72d076edf140ce823be8faad0f1bf2c2ff957dc5a8bdc0c46-rootfs.mount: Deactivated successfully. 
May 16 00:27:22.947697 kubelet[2689]: E0516 00:27:22.947675 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:23.003318 kubelet[2689]: E0516 00:27:23.003271 2689 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:27:23.828097 kubelet[2689]: E0516 00:27:23.828057 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:23.829932 containerd[1500]: time="2025-05-16T00:27:23.829879317Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 00:27:23.905143 containerd[1500]: time="2025-05-16T00:27:23.904321145Z" level=info msg="Container a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac: CDI devices from CRI Config.CDIDevices: []"
May 16 00:27:23.921217 containerd[1500]: time="2025-05-16T00:27:23.921173713Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac\""
May 16 00:27:23.921768 containerd[1500]: time="2025-05-16T00:27:23.921729027Z" level=info msg="StartContainer for \"a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac\""
May 16 00:27:23.923274 containerd[1500]: time="2025-05-16T00:27:23.923228663Z" level=info msg="connecting to shim a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac" address="unix:///run/containerd/s/a63357716efdedba5ac14f925655c7fe32d919a914f53e2d448fbfebc48de4ec" protocol=ttrpc version=3
May 16 00:27:23.944469 systemd[1]: Started cri-containerd-a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac.scope - libcontainer container a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac.
May 16 00:27:23.983622 systemd[1]: cri-containerd-a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac.scope: Deactivated successfully.
May 16 00:27:23.984049 containerd[1500]: time="2025-05-16T00:27:23.984012508Z" level=info msg="StartContainer for \"a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac\" returns successfully"
May 16 00:27:23.986535 containerd[1500]: time="2025-05-16T00:27:23.986487745Z" level=info msg="received exit event container_id:\"a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac\" id:\"a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac\" pid:4635 exited_at:{seconds:1747355243 nanos:986313034}"
May 16 00:27:23.986663 containerd[1500]: time="2025-05-16T00:27:23.986572887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac\" id:\"a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac\" pid:4635 exited_at:{seconds:1747355243 nanos:986313034}"
May 16 00:27:24.005296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a658c231f78fae7ac5d0e04178bdef7f7d8ab59a7d9d9d468cd8401ea0958fac-rootfs.mount: Deactivated successfully.
May 16 00:27:24.833976 kubelet[2689]: E0516 00:27:24.833941 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:24.837534 containerd[1500]: time="2025-05-16T00:27:24.837480471Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 00:27:25.011194 containerd[1500]: time="2025-05-16T00:27:25.011118378Z" level=info msg="Container c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357: CDI devices from CRI Config.CDIDevices: []"
May 16 00:27:25.026699 containerd[1500]: time="2025-05-16T00:27:25.026544204Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357\""
May 16 00:27:25.027404 containerd[1500]: time="2025-05-16T00:27:25.027325226Z" level=info msg="StartContainer for \"c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357\""
May 16 00:27:25.028596 containerd[1500]: time="2025-05-16T00:27:25.028541133Z" level=info msg="connecting to shim c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357" address="unix:///run/containerd/s/a63357716efdedba5ac14f925655c7fe32d919a914f53e2d448fbfebc48de4ec" protocol=ttrpc version=3
May 16 00:27:25.061251 systemd[1]: Started cri-containerd-c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357.scope - libcontainer container c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357.
May 16 00:27:25.101046 systemd[1]: cri-containerd-c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357.scope: Deactivated successfully.
May 16 00:27:25.102168 containerd[1500]: time="2025-05-16T00:27:25.102084638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357\" id:\"c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357\" pid:4675 exited_at:{seconds:1747355245 nanos:101409147}"
May 16 00:27:25.108967 containerd[1500]: time="2025-05-16T00:27:25.108459353Z" level=info msg="received exit event container_id:\"c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357\" id:\"c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357\" pid:4675 exited_at:{seconds:1747355245 nanos:101409147}"
May 16 00:27:25.120760 containerd[1500]: time="2025-05-16T00:27:25.120688705Z" level=info msg="StartContainer for \"c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357\" returns successfully"
May 16 00:27:25.137419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c778de709a4f6c6865a36723eba2b4c43fbf6469d9b4553120e8e95d7c1cd357-rootfs.mount: Deactivated successfully.
May 16 00:27:25.839179 kubelet[2689]: E0516 00:27:25.839147 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:25.840753 containerd[1500]: time="2025-05-16T00:27:25.840684356Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:27:25.850436 containerd[1500]: time="2025-05-16T00:27:25.850403225Z" level=info msg="Container eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637: CDI devices from CRI Config.CDIDevices: []"
May 16 00:27:25.858733 containerd[1500]: time="2025-05-16T00:27:25.858696629Z" level=info msg="CreateContainer within sandbox \"44c17fa62236db82dd6f9a0aaddaa02a10218e076bc437c13ee669fa460e025b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\""
May 16 00:27:25.859173 containerd[1500]: time="2025-05-16T00:27:25.859131413Z" level=info msg="StartContainer for \"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\""
May 16 00:27:25.859991 containerd[1500]: time="2025-05-16T00:27:25.859965886Z" level=info msg="connecting to shim eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637" address="unix:///run/containerd/s/a63357716efdedba5ac14f925655c7fe32d919a914f53e2d448fbfebc48de4ec" protocol=ttrpc version=3
May 16 00:27:25.879489 systemd[1]: Started cri-containerd-eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637.scope - libcontainer container eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637.
May 16 00:27:25.912560 containerd[1500]: time="2025-05-16T00:27:25.912514922Z" level=info msg="StartContainer for \"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\" returns successfully"
May 16 00:27:25.977850 containerd[1500]: time="2025-05-16T00:27:25.977782318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\" id:\"489bff0cf66952997e0bd3f41fb38ad0886aac1af73649d8edcb8a2dd90e1ece\" pid:4743 exited_at:{seconds:1747355245 nanos:976647614}"
May 16 00:27:26.309374 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 16 00:27:26.844279 kubelet[2689]: E0516 00:27:26.844248 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:27.845774 kubelet[2689]: E0516 00:27:27.845744 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:28.093748 containerd[1500]: time="2025-05-16T00:27:28.093699107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\" id:\"8303fd31f9e5c8e9f38c988b5143d2072f4d4477dc6fa5e80444b3f873f10e32\" pid:4935 exit_status:1 exited_at:{seconds:1747355248 nanos:93290041}"
May 16 00:27:29.338649 systemd-networkd[1417]: lxc_health: Link UP
May 16 00:27:29.340245 systemd-networkd[1417]: lxc_health: Gained carrier
May 16 00:27:29.824237 kubelet[2689]: E0516 00:27:29.824191 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:29.854028 kubelet[2689]: I0516 00:27:29.852795 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xtztv" podStartSLOduration=8.85277689 podStartE2EDuration="8.85277689s" podCreationTimestamp="2025-05-16 00:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:27:26.856605096 +0000 UTC m=+89.012055963" watchObservedRunningTime="2025-05-16 00:27:29.85277689 +0000 UTC m=+92.008227747"
May 16 00:27:29.854028 kubelet[2689]: E0516 00:27:29.853492 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:30.189772 containerd[1500]: time="2025-05-16T00:27:30.189721493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\" id:\"829e9edce86f02644ee729332fd3202bcc24cc1be25080bc80992b62fc2006da\" pid:5302 exited_at:{seconds:1747355250 nanos:189302749}"
May 16 00:27:30.640573 systemd-networkd[1417]: lxc_health: Gained IPv6LL
May 16 00:27:30.854591 kubelet[2689]: E0516 00:27:30.854556 2689 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:27:32.280006 containerd[1500]: time="2025-05-16T00:27:32.279959756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\" id:\"86de61b3fc6cb25b7440535ce99de42b9c7b678495f273410135f315da610f84\" pid:5336 exited_at:{seconds:1747355252 nanos:279626775}"
May 16 00:27:34.354122 containerd[1500]: time="2025-05-16T00:27:34.354069375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eff64006166a4ae4ce0296cb46f028fa564cfd1adb512a570b7587377c483637\" id:\"4424369b35086f20b3d2b4ee9f5ef78fd8e536d4aeb6615f23aacfe647fb4727\" pid:5370 exited_at:{seconds:1747355254 nanos:353759398}"
May 16 00:27:34.369117 sshd[4479]: Connection closed by 10.0.0.1 port 33518
May 16 00:27:34.369312 sshd-session[4472]: pam_unix(sshd:session): session closed for user core
May 16 00:27:34.373273 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:33518.service: Deactivated successfully.
May 16 00:27:34.375309 systemd[1]: session-28.scope: Deactivated successfully.
May 16 00:27:34.376043 systemd-logind[1488]: Session 28 logged out. Waiting for processes to exit.
May 16 00:27:34.376936 systemd-logind[1488]: Removed session 28.