May 17 00:11:23.873633 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:11:23.873655 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:11:23.873665 kernel: BIOS-provided physical RAM map:
May 17 00:11:23.873672 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:11:23.873678 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:11:23.873684 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:11:23.873691 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 17 00:11:23.873697 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 17 00:11:23.873703 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:11:23.873712 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:11:23.873718 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:11:23.873724 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:11:23.873730 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:11:23.873736 kernel: NX (Execute Disable) protection: active
May 17 00:11:23.873744 kernel: APIC: Static calls initialized
May 17 00:11:23.873753 kernel: SMBIOS 2.8 present.
May 17 00:11:23.873759 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 17 00:11:23.873766 kernel: Hypervisor detected: KVM
May 17 00:11:23.873773 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:11:23.873779 kernel: kvm-clock: using sched offset of 2231275064 cycles
May 17 00:11:23.873786 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:11:23.873793 kernel: tsc: Detected 2794.748 MHz processor
May 17 00:11:23.873800 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:11:23.873807 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:11:23.873814 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 17 00:11:23.873824 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:11:23.873831 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:11:23.873838 kernel: Using GB pages for direct mapping
May 17 00:11:23.873844 kernel: ACPI: Early table checksum verification disabled
May 17 00:11:23.873851 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 17 00:11:23.873858 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:23.873865 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:23.873872 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:23.873881 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 17 00:11:23.873888 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:23.873895 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:23.873901 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:23.873908 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:23.873915 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 17 00:11:23.873922 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 17 00:11:23.873933 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 17 00:11:23.873942 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 17 00:11:23.873949 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 17 00:11:23.873956 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 17 00:11:23.873963 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 17 00:11:23.873970 kernel: No NUMA configuration found
May 17 00:11:23.873977 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 17 00:11:23.873984 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 17 00:11:23.873993 kernel: Zone ranges:
May 17 00:11:23.874000 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:11:23.874007 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 17 00:11:23.874014 kernel: Normal empty
May 17 00:11:23.874021 kernel: Movable zone start for each node
May 17 00:11:23.874028 kernel: Early memory node ranges
May 17 00:11:23.874036 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:11:23.874043 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 17 00:11:23.874050 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 17 00:11:23.874059 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:11:23.874066 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:11:23.874074 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 17 00:11:23.874081 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:11:23.874088 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:11:23.874095 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:11:23.874102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:11:23.874109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:11:23.874117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:11:23.874126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:11:23.874133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:11:23.874140 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:11:23.874147 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:11:23.874154 kernel: TSC deadline timer available
May 17 00:11:23.874161 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 17 00:11:23.874169 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:11:23.874176 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:11:23.874183 kernel: kvm-guest: setup PV sched yield
May 17 00:11:23.874192 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:11:23.874199 kernel: Booting paravirtualized kernel on KVM
May 17 00:11:23.874206 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:11:23.874213 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 17 00:11:23.874221 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 17 00:11:23.874228 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 17 00:11:23.874242 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 00:11:23.874249 kernel: kvm-guest: PV spinlocks enabled
May 17 00:11:23.874256 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:11:23.874265 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:11:23.874275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:11:23.874282 kernel: random: crng init done
May 17 00:11:23.874289 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:11:23.874296 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:11:23.874303 kernel: Fallback order for Node 0: 0
May 17 00:11:23.874311 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 17 00:11:23.874318 kernel: Policy zone: DMA32
May 17 00:11:23.874325 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:11:23.874335 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 136900K reserved, 0K cma-reserved)
May 17 00:11:23.874342 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 00:11:23.874349 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:11:23.874356 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:11:23.874363 kernel: Dynamic Preempt: voluntary
May 17 00:11:23.874390 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:11:23.874398 kernel: rcu: RCU event tracing is enabled.
May 17 00:11:23.874414 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 00:11:23.874429 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:11:23.874447 kernel: Rude variant of Tasks RCU enabled.
May 17 00:11:23.874454 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:11:23.874461 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:11:23.874469 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 00:11:23.874476 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 00:11:23.874483 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:11:23.874490 kernel: Console: colour VGA+ 80x25
May 17 00:11:23.874497 kernel: printk: console [ttyS0] enabled
May 17 00:11:23.874504 kernel: ACPI: Core revision 20230628
May 17 00:11:23.874513 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:11:23.874521 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:11:23.874528 kernel: x2apic enabled
May 17 00:11:23.874535 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:11:23.874542 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 17 00:11:23.874549 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 17 00:11:23.874556 kernel: kvm-guest: setup PV IPIs
May 17 00:11:23.874573 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:11:23.874580 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:11:23.874587 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 17 00:11:23.874595 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:11:23.874602 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:11:23.874612 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:11:23.874619 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:11:23.874627 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:11:23.874634 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:11:23.874644 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:11:23.874651 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:11:23.874659 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:11:23.874666 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:11:23.874674 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 17 00:11:23.874682 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 17 00:11:23.874689 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 17 00:11:23.874697 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:11:23.874704 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:11:23.874714 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:11:23.874722 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:11:23.874729 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 17 00:11:23.874737 kernel: Freeing SMP alternatives memory: 32K
May 17 00:11:23.874744 kernel: pid_max: default: 32768 minimum: 301
May 17 00:11:23.874752 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:11:23.874759 kernel: landlock: Up and running.
May 17 00:11:23.874766 kernel: SELinux: Initializing.
May 17 00:11:23.874774 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:11:23.874784 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:11:23.874792 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:11:23.874799 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:11:23.874807 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:11:23.874814 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:11:23.874822 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:11:23.874829 kernel: ... version:                0
May 17 00:11:23.874836 kernel: ... bit width:              48
May 17 00:11:23.874846 kernel: ... generic registers:      6
May 17 00:11:23.874853 kernel: ... value mask:             0000ffffffffffff
May 17 00:11:23.874861 kernel: ... max period:             00007fffffffffff
May 17 00:11:23.874868 kernel: ... fixed-purpose events:   0
May 17 00:11:23.874876 kernel: ... event mask:             000000000000003f
May 17 00:11:23.874883 kernel: signal: max sigframe size: 1776
May 17 00:11:23.874890 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:11:23.874898 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:11:23.874905 kernel: smp: Bringing up secondary CPUs ...
May 17 00:11:23.874913 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:11:23.874923 kernel: .... node #0, CPUs: #1 #2 #3
May 17 00:11:23.874930 kernel: smp: Brought up 1 node, 4 CPUs
May 17 00:11:23.874937 kernel: smpboot: Max logical packages: 1
May 17 00:11:23.874945 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 17 00:11:23.874952 kernel: devtmpfs: initialized
May 17 00:11:23.874959 kernel: x86/mm: Memory block size: 128MB
May 17 00:11:23.874967 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:11:23.874975 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 00:11:23.874982 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:11:23.874992 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:11:23.874999 kernel: audit: initializing netlink subsys (disabled)
May 17 00:11:23.875007 kernel: audit: type=2000 audit(1747440683.899:1): state=initialized audit_enabled=0 res=1
May 17 00:11:23.875014 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:11:23.875021 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:11:23.875029 kernel: cpuidle: using governor menu
May 17 00:11:23.875036 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:11:23.875043 kernel: dca service started, version 1.12.1
May 17 00:11:23.875051 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:11:23.875060 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 17 00:11:23.875068 kernel: PCI: Using configuration type 1 for base access
May 17 00:11:23.875075 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:11:23.875083 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:11:23.875090 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:11:23.875098 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:11:23.875105 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:11:23.875113 kernel: ACPI: Added _OSI(Module Device)
May 17 00:11:23.875120 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:11:23.875129 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:11:23.875137 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:11:23.875144 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:11:23.875152 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:11:23.875159 kernel: ACPI: Interpreter enabled
May 17 00:11:23.875166 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:11:23.875174 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:11:23.875181 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:11:23.875189 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:11:23.875198 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:11:23.875206 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:11:23.875406 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:11:23.875539 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:11:23.875660 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:11:23.875670 kernel: PCI host bridge to bus 0000:00
May 17 00:11:23.875794 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:11:23.875910 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:11:23.876019 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:11:23.876127 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 17 00:11:23.876244 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:11:23.876354 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 17 00:11:23.876484 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:11:23.876626 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:11:23.876760 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:11:23.876880 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 17 00:11:23.877000 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 17 00:11:23.877118 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 17 00:11:23.877245 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:11:23.877388 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:11:23.877517 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 17 00:11:23.877636 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 17 00:11:23.877755 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 17 00:11:23.877883 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 17 00:11:23.878002 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:11:23.878122 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:11:23.878249 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 17 00:11:23.878402 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:11:23.878527 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 17 00:11:23.878648 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 17 00:11:23.878766 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 17 00:11:23.878883 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 17 00:11:23.879010 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:11:23.879128 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:11:23.879268 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:11:23.879428 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 17 00:11:23.879550 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 17 00:11:23.879684 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:11:23.879804 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:11:23.879814 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:11:23.879822 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:11:23.879834 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:11:23.879842 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:11:23.879849 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:11:23.879857 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:11:23.879864 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:11:23.879872 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:11:23.879880 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:11:23.879887 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:11:23.879894 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:11:23.879905 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:11:23.879912 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:11:23.879920 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:11:23.879927 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:11:23.879935 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:11:23.879942 kernel: iommu: Default domain type: Translated
May 17 00:11:23.879950 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:11:23.879957 kernel: PCI: Using ACPI for IRQ routing
May 17 00:11:23.879965 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:11:23.879975 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:11:23.879982 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 17 00:11:23.880104 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:11:23.880224 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:11:23.880359 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:11:23.880387 kernel: vgaarb: loaded
May 17 00:11:23.880396 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:11:23.880403 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:11:23.880414 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:11:23.880422 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:11:23.880430 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:11:23.880438 kernel: pnp: PnP ACPI init
May 17 00:11:23.880569 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:11:23.880580 kernel: pnp: PnP ACPI: found 6 devices
May 17 00:11:23.880588 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:11:23.880596 kernel: NET: Registered PF_INET protocol family
May 17 00:11:23.880607 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:11:23.880614 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:11:23.880622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:11:23.880630 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:11:23.880637 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:11:23.880645 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:11:23.880653 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:11:23.880660 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:11:23.880668 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:11:23.880677 kernel: NET: Registered PF_XDP protocol family
May 17 00:11:23.880788 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:11:23.880898 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:11:23.881007 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:11:23.881118 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 17 00:11:23.881226 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:11:23.881346 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 17 00:11:23.881356 kernel: PCI: CLS 0 bytes, default 64
May 17 00:11:23.881430 kernel: Initialise system trusted keyrings
May 17 00:11:23.881438 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:11:23.881446 kernel: Key type asymmetric registered
May 17 00:11:23.881453 kernel: Asymmetric key parser 'x509' registered
May 17 00:11:23.881461 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:11:23.881468 kernel: io scheduler mq-deadline registered
May 17 00:11:23.881480 kernel: io scheduler kyber registered
May 17 00:11:23.881487 kernel: io scheduler bfq registered
May 17 00:11:23.881495 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:11:23.881506 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:11:23.881513 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:11:23.881521 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 00:11:23.881528 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:11:23.881536 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:11:23.881544 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:11:23.881551 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:11:23.881559 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:11:23.881684 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 00:11:23.881697 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:11:23.881808 kernel: rtc_cmos 00:04: registered as rtc0
May 17 00:11:23.881919 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:11:23 UTC (1747440683)
May 17 00:11:23.882029 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:11:23.882039 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 00:11:23.882046 kernel: NET: Registered PF_INET6 protocol family
May 17 00:11:23.882054 kernel: Segment Routing with IPv6
May 17 00:11:23.882061 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:11:23.882072 kernel: NET: Registered PF_PACKET protocol family
May 17 00:11:23.882080 kernel: Key type dns_resolver registered
May 17 00:11:23.882087 kernel: IPI shorthand broadcast: enabled
May 17 00:11:23.882095 kernel: sched_clock: Marking stable (546001839, 104907311)->(695010130, -44100980)
May 17 00:11:23.882103 kernel: registered taskstats version 1
May 17 00:11:23.882110 kernel: Loading compiled-in X.509 certificates
May 17 00:11:23.882118 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:11:23.882125 kernel: Key type .fscrypt registered
May 17 00:11:23.882133 kernel: Key type fscrypt-provisioning registered
May 17 00:11:23.882143 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:11:23.882150 kernel: ima: Allocated hash algorithm: sha1
May 17 00:11:23.882157 kernel: ima: No architecture policies found
May 17 00:11:23.882165 kernel: clk: Disabling unused clocks
May 17 00:11:23.882172 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:11:23.882180 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:11:23.882187 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:11:23.882195 kernel: Run /init as init process
May 17 00:11:23.882202 kernel:   with arguments:
May 17 00:11:23.882212 kernel:     /init
May 17 00:11:23.882219 kernel:   with environment:
May 17 00:11:23.882226 kernel:     HOME=/
May 17 00:11:23.882242 kernel:     TERM=linux
May 17 00:11:23.882249 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:11:23.882259 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:11:23.882269 systemd[1]: Detected virtualization kvm.
May 17 00:11:23.882277 systemd[1]: Detected architecture x86-64.
May 17 00:11:23.882288 systemd[1]: Running in initrd.
May 17 00:11:23.882295 systemd[1]: No hostname configured, using default hostname.
May 17 00:11:23.882303 systemd[1]: Hostname set to .
May 17 00:11:23.882312 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:11:23.882320 systemd[1]: Queued start job for default target initrd.target.
May 17 00:11:23.882328 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:11:23.882336 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:11:23.882345 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:11:23.882356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:11:23.882389 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:11:23.882400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:11:23.882410 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:11:23.882420 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:11:23.882429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:11:23.882437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:11:23.882445 systemd[1]: Reached target paths.target - Path Units.
May 17 00:11:23.882453 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:11:23.882461 systemd[1]: Reached target swap.target - Swaps.
May 17 00:11:23.882470 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:11:23.882478 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:11:23.882486 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:11:23.882496 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:11:23.882505 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:11:23.882513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:11:23.882521 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:11:23.882529 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:11:23.882538 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:11:23.882548 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:11:23.882556 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:11:23.882567 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:11:23.882575 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:11:23.882583 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:11:23.882592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:11:23.882600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:11:23.882608 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:11:23.882617 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:11:23.882625 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:11:23.882636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:11:23.882661 systemd-journald[191]: Collecting audit messages is disabled.
May 17 00:11:23.882683 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:11:23.882694 systemd-journald[191]: Journal started
May 17 00:11:23.882714 systemd-journald[191]: Runtime Journal (/run/log/journal/67c3af6996e8465298f16ecc2ba8c1e2) is 6.0M, max 48.4M, 42.3M free.
May 17 00:11:23.870619 systemd-modules-load[194]: Inserted module 'overlay'
May 17 00:11:23.910594 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:11:23.910609 kernel: Bridge firewalling registered
May 17 00:11:23.910620 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:11:23.897061 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 17 00:11:23.910792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:11:23.913010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:11:23.926479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:11:23.928276 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:11:23.929550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:11:23.933187 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:11:23.942622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:11:23.945128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:11:23.947020 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:11:23.949755 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:11:23.965531 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:11:23.968956 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:11:23.980117 dracut-cmdline[228]: dracut-dracut-053
May 17 00:11:23.983782 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:11:24.002391 systemd-resolved[232]: Positive Trust Anchors:
May 17 00:11:24.002407 systemd-resolved[232]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:11:24.002437 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:11:24.005034 systemd-resolved[232]: Defaulting to hostname 'linux'. May 17 00:11:24.006058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:11:24.012237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:11:24.068398 kernel: SCSI subsystem initialized May 17 00:11:24.077393 kernel: Loading iSCSI transport class v2.0-870. May 17 00:11:24.087393 kernel: iscsi: registered transport (tcp) May 17 00:11:24.108395 kernel: iscsi: registered transport (qla4xxx) May 17 00:11:24.108416 kernel: QLogic iSCSI HBA Driver May 17 00:11:24.159728 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:11:24.173508 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:11:24.198211 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:11:24.198240 kernel: device-mapper: uevent: version 1.0.3 May 17 00:11:24.198265 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:11:24.239400 kernel: raid6: avx2x4 gen() 30663 MB/s May 17 00:11:24.256390 kernel: raid6: avx2x2 gen() 31504 MB/s May 17 00:11:24.273474 kernel: raid6: avx2x1 gen() 26068 MB/s May 17 00:11:24.273492 kernel: raid6: using algorithm avx2x2 gen() 31504 MB/s May 17 00:11:24.291481 kernel: raid6: .... xor() 19888 MB/s, rmw enabled May 17 00:11:24.291496 kernel: raid6: using avx2x2 recovery algorithm May 17 00:11:24.311393 kernel: xor: automatically using best checksumming function avx May 17 00:11:24.464396 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:11:24.477326 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:11:24.490535 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:11:24.504762 systemd-udevd[415]: Using default interface naming scheme 'v255'. May 17 00:11:24.510152 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:11:24.520495 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:11:24.536282 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation May 17 00:11:24.569195 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:11:24.582498 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:11:24.643335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:11:24.653556 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:11:24.665130 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:11:24.668092 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 17 00:11:24.673994 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 17 00:11:24.670977 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:11:24.672891 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:11:24.687534 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 00:11:24.694204 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:11:24.692925 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:11:24.698978 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:11:24.699001 kernel: GPT:9289727 != 19775487 May 17 00:11:24.699016 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:11:24.699027 kernel: GPT:9289727 != 19775487 May 17 00:11:24.699965 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:11:24.699983 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:11:24.701389 kernel: libata version 3.00 loaded. May 17 00:11:24.702363 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:11:24.703601 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:11:24.706567 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:11:24.707794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:11:24.707913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:11:24.712592 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:11:24.717292 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 00:11:24.717312 kernel: AES CTR mode by8 optimization enabled May 17 00:11:24.718729 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:11:24.718908 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:11:24.720250 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:11:24.720501 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:11:24.721569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:11:24.726280 kernel: scsi host0: ahci May 17 00:11:24.723773 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:11:24.729485 kernel: scsi host1: ahci May 17 00:11:24.734430 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475) May 17 00:11:24.736396 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (463) May 17 00:11:24.745607 kernel: scsi host2: ahci May 17 00:11:24.748384 kernel: scsi host3: ahci May 17 00:11:24.752505 kernel: scsi host4: ahci May 17 00:11:24.752684 kernel: scsi host5: ahci May 17 00:11:24.752827 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 17 00:11:24.752839 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 17 00:11:24.752850 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 17 00:11:24.752860 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 17 00:11:24.752870 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 17 00:11:24.752880 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 17 00:11:24.752265 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 17 00:11:24.780662 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
May 17 00:11:24.780947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:11:24.795429 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 00:11:24.799163 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 17 00:11:24.799247 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 17 00:11:24.814601 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:11:24.817807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:11:24.825204 disk-uuid[555]: Primary Header is updated. May 17 00:11:24.825204 disk-uuid[555]: Secondary Entries is updated. May 17 00:11:24.825204 disk-uuid[555]: Secondary Header is updated. May 17 00:11:24.829397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:11:24.833391 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:11:24.838408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:11:24.843685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:11:25.061576 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:11:25.061627 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:11:25.063116 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:11:25.063181 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:11:25.063192 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:11:25.064400 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:11:25.065395 kernel: ata3.00: applying bridge limits May 17 00:11:25.065409 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:11:25.066392 kernel: ata3.00: configured for UDMA/100 May 17 00:11:25.068400 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:11:25.115916 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:11:25.116137 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:11:25.130394 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 00:11:25.838398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:11:25.839466 disk-uuid[557]: The operation has completed successfully. May 17 00:11:25.864666 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:11:25.864789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:11:25.892500 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:11:25.895896 sh[595]: Success May 17 00:11:25.908416 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:11:25.940795 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:11:25.958797 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:11:25.961802 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 17 00:11:25.974852 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:11:25.974881 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:11:25.974893 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:11:25.975894 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:11:25.976642 kernel: BTRFS info (device dm-0): using free space tree May 17 00:11:25.982566 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:11:25.984462 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:11:26.004486 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:11:26.007028 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:11:26.017224 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:11:26.017252 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:11:26.017263 kernel: BTRFS info (device vda6): using free space tree May 17 00:11:26.021427 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:11:26.030907 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:11:26.032985 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:11:26.042846 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:11:26.051544 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 17 00:11:26.101636 ignition[689]: Ignition 2.19.0 May 17 00:11:26.101648 ignition[689]: Stage: fetch-offline May 17 00:11:26.101685 ignition[689]: no configs at "/usr/lib/ignition/base.d" May 17 00:11:26.101694 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:11:26.101801 ignition[689]: parsed url from cmdline: "" May 17 00:11:26.101806 ignition[689]: no config URL provided May 17 00:11:26.101811 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:11:26.101820 ignition[689]: no config at "/usr/lib/ignition/user.ign" May 17 00:11:26.101847 ignition[689]: op(1): [started] loading QEMU firmware config module May 17 00:11:26.101853 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 00:11:26.109824 ignition[689]: op(1): [finished] loading QEMU firmware config module May 17 00:11:26.133008 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:11:26.146501 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:11:26.152280 ignition[689]: parsing config with SHA512: 195fab5033b49af8bbcb1d99aecda22acce27c02ebf3fff3fe51d891fd8044605b6839447dfe8a6f8022b264e9ac65264e154a22e3619cc0892b131613ebdb90 May 17 00:11:26.156541 unknown[689]: fetched base config from "system" May 17 00:11:26.157338 ignition[689]: fetch-offline: fetch-offline passed May 17 00:11:26.156564 unknown[689]: fetched user config from "qemu" May 17 00:11:26.157444 ignition[689]: Ignition finished successfully May 17 00:11:26.159860 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:11:26.168514 systemd-networkd[784]: lo: Link UP May 17 00:11:26.168524 systemd-networkd[784]: lo: Gained carrier May 17 00:11:26.170024 systemd-networkd[784]: Enumeration completed May 17 00:11:26.170190 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 17 00:11:26.170496 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:26.170501 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:11:26.171670 systemd-networkd[784]: eth0: Link UP May 17 00:11:26.171673 systemd-networkd[784]: eth0: Gained carrier May 17 00:11:26.171680 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:26.172480 systemd[1]: Reached target network.target - Network. May 17 00:11:26.174391 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:11:26.185540 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:11:26.191438 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:11:26.200409 ignition[787]: Ignition 2.19.0 May 17 00:11:26.200421 ignition[787]: Stage: kargs May 17 00:11:26.200606 ignition[787]: no configs at "/usr/lib/ignition/base.d" May 17 00:11:26.200618 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:11:26.201510 ignition[787]: kargs: kargs passed May 17 00:11:26.201554 ignition[787]: Ignition finished successfully May 17 00:11:26.205241 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:11:26.212570 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 17 00:11:26.223904 ignition[796]: Ignition 2.19.0 May 17 00:11:26.223915 ignition[796]: Stage: disks May 17 00:11:26.224083 ignition[796]: no configs at "/usr/lib/ignition/base.d" May 17 00:11:26.224094 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:11:26.227870 ignition[796]: disks: disks passed May 17 00:11:26.227914 ignition[796]: Ignition finished successfully May 17 00:11:26.231172 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:11:26.231443 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:11:26.233132 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:11:26.235247 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:11:26.237581 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:11:26.237912 systemd[1]: Reached target basic.target - Basic System. May 17 00:11:26.253613 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:11:26.267773 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:11:26.274100 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:11:26.286469 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:11:26.372404 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:11:26.373092 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:11:26.373812 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:11:26.383461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:11:26.384401 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:11:26.386233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 17 00:11:26.386269 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:11:26.386287 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:11:26.396395 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:11:26.401395 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) May 17 00:11:26.401417 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:11:26.402807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:11:26.403390 kernel: BTRFS info (device vda6): using free space tree May 17 00:11:26.404491 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:11:26.407202 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:11:26.408727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:11:26.436464 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:11:26.440619 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory May 17 00:11:26.445278 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:11:26.449349 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:11:26.538022 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:11:26.544456 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:11:26.546503 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:11:26.552386 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:11:26.571655 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 17 00:11:26.576898 ignition[928]: INFO : Ignition 2.19.0 May 17 00:11:26.576898 ignition[928]: INFO : Stage: mount May 17 00:11:26.578629 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:11:26.578629 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:11:26.578629 ignition[928]: INFO : mount: mount passed May 17 00:11:26.578629 ignition[928]: INFO : Ignition finished successfully May 17 00:11:26.580461 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:11:26.592480 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:11:26.974569 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:11:26.987522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:11:26.994684 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) May 17 00:11:26.994721 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:11:26.994736 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:11:26.995548 kernel: BTRFS info (device vda6): using free space tree May 17 00:11:26.998405 kernel: BTRFS info (device vda6): auto enabling async discard May 17 00:11:27.000028 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:11:27.025697 ignition[959]: INFO : Ignition 2.19.0 May 17 00:11:27.025697 ignition[959]: INFO : Stage: files May 17 00:11:27.027415 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:11:27.027415 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:11:27.027415 ignition[959]: DEBUG : files: compiled without relabeling support, skipping May 17 00:11:27.031245 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:11:27.031245 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:11:27.031245 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:11:27.035591 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:11:27.035591 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:11:27.035591 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:11:27.035591 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:11:27.031775 unknown[959]: wrote ssh authorized keys file for user: core May 17 00:11:27.107787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:11:27.283512 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:11:27.283512 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:11:27.288204 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 17 00:11:27.574523 systemd-networkd[784]: eth0: Gained IPv6LL May 17 00:11:27.770624 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:11:27.870574 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:11:27.870574 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:11:27.874402 ignition[959]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:11:27.874402 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:11:28.512239 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:11:28.938027 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:11:28.938027 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 17 00:11:28.941873 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 17 00:11:28.963785 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 17 00:11:28.967922 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 17 00:11:28.969526 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 17 00:11:28.969526 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 17 00:11:28.969526 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:11:28.969526 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:11:28.969526 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:11:28.969526 ignition[959]: INFO : files: files passed May 17 00:11:28.969526 ignition[959]: INFO : Ignition finished successfully May 17 00:11:28.971189 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:11:28.984628 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:11:28.988500 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:11:28.991285 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:11:28.992306 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 17 00:11:28.997615 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
May 17 00:11:29.001787 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:11:29.001787 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:11:29.004956 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:11:29.007185 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:11:29.010251 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:11:29.021493 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:11:29.045744 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:11:29.046785 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:11:29.049458 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:11:29.051555 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:11:29.053608 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:11:29.064499 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:11:29.079461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:11:29.097531 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:11:29.107213 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:11:29.109601 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:11:29.110910 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:11:29.112897 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:11:29.113026 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:11:29.115338 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:11:29.116935 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:11:29.118978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:11:29.121061 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:11:29.123114 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:11:29.125287 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:11:29.127426 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:11:29.129731 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:11:29.131748 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:11:29.133962 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:11:29.135733 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:11:29.135866 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:11:29.138179 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:11:29.139664 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:11:29.141746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:11:29.141884 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:11:29.144000 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:11:29.144127 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:11:29.146492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:11:29.146615 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:11:29.148493 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:11:29.150219 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:11:29.155445 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:11:29.157044 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:11:29.158790 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:11:29.160777 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:11:29.160886 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:11:29.163223 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:11:29.163330 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:11:29.165091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:11:29.165228 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:11:29.167210 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:11:29.167336 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:11:29.183505 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:11:29.184471 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:11:29.184627 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:11:29.187970 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:11:29.189711 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:11:29.189863 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:11:29.191905 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:11:29.192007 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:11:29.197267 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:11:29.197419 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:11:29.200920 ignition[1013]: INFO : Ignition 2.19.0
May 17 00:11:29.200920 ignition[1013]: INFO : Stage: umount
May 17 00:11:29.200920 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:11:29.200920 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:11:29.206776 ignition[1013]: INFO : umount: umount passed
May 17 00:11:29.206776 ignition[1013]: INFO : Ignition finished successfully
May 17 00:11:29.203461 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:11:29.203583 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:11:29.205141 systemd[1]: Stopped target network.target - Network.
May 17 00:11:29.206784 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:11:29.206845 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:11:29.208649 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:11:29.208696 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:11:29.210552 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:11:29.210597 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:11:29.212504 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:11:29.212552 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:11:29.214681 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:11:29.216805 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:11:29.218413 systemd-networkd[784]: eth0: DHCPv6 lease lost
May 17 00:11:29.219759 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:11:29.220282 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:11:29.220414 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:11:29.222342 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:11:29.222428 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:11:29.228552 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:11:29.230255 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:11:29.230319 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:11:29.232824 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:11:29.234996 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:11:29.235115 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:11:29.240772 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:11:29.240857 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:11:29.251384 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:11:29.251447 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:11:29.254565 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:11:29.255575 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:11:29.258628 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:11:29.259668 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:11:29.261836 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:11:29.262887 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:11:29.266702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:11:29.266763 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:11:29.269818 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:11:29.269865 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:11:29.272805 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:11:29.272859 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:11:29.275898 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:11:29.275951 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:11:29.278950 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:11:29.279934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:11:29.299567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:11:29.300679 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:11:29.300744 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:11:29.303079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:11:29.303128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:11:29.306641 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:11:29.306775 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:11:29.413249 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:11:29.414261 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:11:29.416315 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:11:29.418420 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:11:29.418473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:11:29.433503 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:11:29.441836 systemd[1]: Switching root.
May 17 00:11:29.474539 systemd-journald[191]: Journal stopped
May 17 00:11:30.617835 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
May 17 00:11:30.617899 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:11:30.617913 kernel: SELinux: policy capability open_perms=1
May 17 00:11:30.617924 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:11:30.617935 kernel: SELinux: policy capability always_check_network=0
May 17 00:11:30.617949 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:11:30.617960 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:11:30.617971 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:11:30.617984 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:11:30.617995 kernel: audit: type=1403 audit(1747440689.904:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:11:30.618012 systemd[1]: Successfully loaded SELinux policy in 39.625ms.
May 17 00:11:30.618036 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.081ms.
May 17 00:11:30.618049 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:11:30.618064 systemd[1]: Detected virtualization kvm.
May 17 00:11:30.618076 systemd[1]: Detected architecture x86-64.
May 17 00:11:30.618088 systemd[1]: Detected first boot.
May 17 00:11:30.618100 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:11:30.618112 zram_generator::config[1058]: No configuration found.
May 17 00:11:30.618137 systemd[1]: Populated /etc with preset unit settings.
May 17 00:11:30.618149 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:11:30.618167 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:11:30.618178 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:11:30.618194 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:11:30.618205 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:11:30.618217 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:11:30.618229 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:11:30.618241 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:11:30.618254 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:11:30.618266 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:11:30.618279 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:11:30.618293 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:11:30.618305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:11:30.618318 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:11:30.618335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:11:30.618348 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:11:30.618360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:11:30.621398 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:11:30.621416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:11:30.621429 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:11:30.621445 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:11:30.621457 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:11:30.621469 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:11:30.621482 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:11:30.621499 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:11:30.621511 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:11:30.621523 systemd[1]: Reached target swap.target - Swaps.
May 17 00:11:30.621535 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:11:30.621549 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:11:30.621561 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:11:30.621573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:11:30.621586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:11:30.621598 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:11:30.621610 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:11:30.621622 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:11:30.621633 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:11:30.621645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:11:30.621660 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:11:30.621672 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:11:30.621684 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:11:30.621697 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:11:30.621712 systemd[1]: Reached target machines.target - Containers.
May 17 00:11:30.621727 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:11:30.621743 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:11:30.621759 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:11:30.621775 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:11:30.621787 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:11:30.621800 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:11:30.621811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:11:30.621823 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:11:30.621835 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:11:30.621847 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:11:30.621861 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:11:30.621872 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:11:30.621887 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:11:30.621898 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:11:30.621911 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:11:30.621923 kernel: loop: module loaded
May 17 00:11:30.621934 kernel: fuse: init (API version 7.39)
May 17 00:11:30.621946 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:11:30.621958 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:11:30.621970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:11:30.622004 systemd-journald[1128]: Collecting audit messages is disabled.
May 17 00:11:30.622112 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:11:30.622134 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:11:30.622146 systemd[1]: Stopped verity-setup.service.
May 17 00:11:30.622158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:11:30.622170 systemd-journald[1128]: Journal started
May 17 00:11:30.622250 systemd-journald[1128]: Runtime Journal (/run/log/journal/67c3af6996e8465298f16ecc2ba8c1e2) is 6.0M, max 48.4M, 42.3M free.
May 17 00:11:30.401198 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:11:30.420531 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 00:11:30.420959 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:11:30.628894 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:11:30.630002 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:11:30.630411 kernel: ACPI: bus type drm_connector registered
May 17 00:11:30.631762 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:11:30.632995 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:11:30.634098 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:11:30.635308 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:11:30.636621 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:11:30.638043 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:11:30.639543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:11:30.641092 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:11:30.641297 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:11:30.642776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:11:30.642971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:11:30.644670 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:11:30.644861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:11:30.646227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:11:30.646433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:11:30.647947 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:11:30.648187 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:11:30.649879 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:11:30.650088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:11:30.651523 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:11:30.652933 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:11:30.654676 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:11:30.670525 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:11:30.680493 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:11:30.682991 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:11:30.684151 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:11:30.684183 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:11:30.686182 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:11:30.688545 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:11:30.692434 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:11:30.693970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:11:30.697486 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:11:30.701304 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:11:30.702518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:11:30.704794 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:11:30.705932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:11:30.708720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:11:30.713976 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:11:30.716835 systemd-journald[1128]: Time spent on flushing to /var/log/journal/67c3af6996e8465298f16ecc2ba8c1e2 is 22.144ms for 953 entries.
May 17 00:11:30.716835 systemd-journald[1128]: System Journal (/var/log/journal/67c3af6996e8465298f16ecc2ba8c1e2) is 8.0M, max 195.6M, 187.6M free.
May 17 00:11:30.768049 systemd-journald[1128]: Received client request to flush runtime journal.
May 17 00:11:30.768098 kernel: loop0: detected capacity change from 0 to 142488
May 17 00:11:30.718613 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:11:30.722174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:11:30.724593 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:11:30.726158 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:11:30.727979 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:11:30.733708 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:11:30.736870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:11:30.745512 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:11:30.761083 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:11:30.762751 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:11:30.772569 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:11:30.775206 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:11:30.773811 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:11:30.790006 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:11:30.799558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:11:30.801394 kernel: loop1: detected capacity change from 0 to 140768
May 17 00:11:30.803975 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:11:30.806250 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:11:30.823040 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
May 17 00:11:30.823059 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
May 17 00:11:30.828997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:11:30.834392 kernel: loop2: detected capacity change from 0 to 221472
May 17 00:11:30.871403 kernel: loop3: detected capacity change from 0 to 142488
May 17 00:11:30.884399 kernel: loop4: detected capacity change from 0 to 140768
May 17 00:11:30.893403 kernel: loop5: detected capacity change from 0 to 221472
May 17 00:11:30.900049 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 17 00:11:30.900654 (sd-merge)[1198]: Merged extensions into '/usr'.
May 17 00:11:30.905551 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:11:30.905568 systemd[1]: Reloading...
May 17 00:11:30.962405 zram_generator::config[1224]: No configuration found.
May 17 00:11:31.025813 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:11:31.082900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:11:31.131301 systemd[1]: Reloading finished in 225 ms.
May 17 00:11:31.172780 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:11:31.174355 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:11:31.197712 systemd[1]: Starting ensure-sysext.service...
May 17 00:11:31.200062 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:11:31.206635 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
May 17 00:11:31.206652 systemd[1]: Reloading...
May 17 00:11:31.223587 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:11:31.224050 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:11:31.225076 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:11:31.225463 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
May 17 00:11:31.225565 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
May 17 00:11:31.229731 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:11:31.229746 systemd-tmpfiles[1262]: Skipping /boot
May 17 00:11:31.243762 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:11:31.243780 systemd-tmpfiles[1262]: Skipping /boot
May 17 00:11:31.268672 zram_generator::config[1292]: No configuration found.
May 17 00:11:31.372163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:11:31.420825 systemd[1]: Reloading finished in 213 ms.
May 17 00:11:31.437890 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:11:31.450801 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:11:31.459344 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:11:31.461906 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:11:31.464249 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:11:31.469390 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:11:31.472811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:11:31.476637 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:11:31.481005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:11:31.481190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:11:31.482311 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:11:31.489664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:11:31.494098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:11:31.496540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:11:31.498483 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:11:31.499649 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:11:31.500648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:11:31.501413 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:11:31.503699 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:11:31.505999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:11:31.506280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:11:31.508948 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:11:31.509144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:11:31.512518 systemd-udevd[1334]: Using default interface naming scheme 'v255'. May 17 00:11:31.521093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:11:31.521763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:11:31.525650 augenrules[1357]: No rules May 17 00:11:31.530762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:11:31.535386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:11:31.538057 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:11:31.542530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:11:31.546608 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 17 00:11:31.547930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:11:31.548959 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:11:31.552931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:11:31.554986 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:11:31.557006 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:11:31.559300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:11:31.559846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:11:31.563134 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:11:31.565134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:11:31.565432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:11:31.568614 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:11:31.568834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:11:31.576534 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:11:31.593048 systemd[1]: Finished ensure-sysext.service. May 17 00:11:31.598250 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:11:31.598488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:11:31.610710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:11:31.616620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 17 00:11:31.619954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:11:31.624528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:11:31.625886 systemd-resolved[1332]: Positive Trust Anchors: May 17 00:11:31.625897 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:11:31.625928 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:11:31.626006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:11:31.628474 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:11:31.629783 systemd-resolved[1332]: Defaulting to hostname 'linux'. May 17 00:11:31.633165 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:11:31.634810 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:11:31.634838 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:11:31.635237 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 17 00:11:31.637233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:11:31.637698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:11:31.640729 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:11:31.640919 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:11:31.642492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:11:31.642653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:11:31.649978 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:11:31.651475 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1376) May 17 00:11:31.656230 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:11:31.660957 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:11:31.662607 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:11:31.662801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:11:31.670448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:11:31.683654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 17 00:11:31.689333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:11:31.690634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:11:31.693422 kernel: ACPI: button: Power Button [PWRF] May 17 00:11:31.706685 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 17 00:11:31.713414 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 17 00:11:31.720167 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:11:31.721718 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:11:31.721906 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:11:31.742329 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:11:31.744731 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:11:31.748673 systemd-networkd[1406]: lo: Link UP May 17 00:11:31.748681 systemd-networkd[1406]: lo: Gained carrier May 17 00:11:31.753291 systemd-networkd[1406]: Enumeration completed May 17 00:11:31.753359 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:11:31.754887 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:31.754891 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:11:31.755818 systemd[1]: Reached target network.target - Network. May 17 00:11:31.757758 systemd-networkd[1406]: eth0: Link UP May 17 00:11:31.757767 systemd-networkd[1406]: eth0: Gained carrier May 17 00:11:31.757781 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:31.769861 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.20/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:11:31.771578 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. May 17 00:11:32.758593 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:11:32.753196 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
May 17 00:11:32.753234 systemd-timesyncd[1409]: Initial clock synchronization to Sat 2025-05-17 00:11:32.753088 UTC. May 17 00:11:32.753267 systemd-resolved[1332]: Clock change detected. Flushing caches. May 17 00:11:32.757213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:11:32.770550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:11:32.823743 kernel: kvm_amd: TSC scaling supported May 17 00:11:32.823839 kernel: kvm_amd: Nested Virtualization enabled May 17 00:11:32.823853 kernel: kvm_amd: Nested Paging enabled May 17 00:11:32.823865 kernel: kvm_amd: LBR virtualization supported May 17 00:11:32.824944 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 17 00:11:32.824958 kernel: kvm_amd: Virtual GIF supported May 17 00:11:32.845418 kernel: EDAC MC: Ver: 3.0.0 May 17 00:11:32.882741 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:11:32.884516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:11:32.897602 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:11:32.908712 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:11:32.939227 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:11:32.941001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:11:32.942241 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:11:32.943505 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:11:32.944872 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:11:32.946445 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
May 17 00:11:32.947706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:11:32.949195 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:11:32.950547 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:11:32.950574 systemd[1]: Reached target paths.target - Path Units. May 17 00:11:32.951568 systemd[1]: Reached target timers.target - Timer Units. May 17 00:11:32.953412 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:11:32.956091 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:11:32.963021 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:11:32.965347 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:11:32.966950 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:11:32.968139 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:11:32.969119 systemd[1]: Reached target basic.target - Basic System. May 17 00:11:32.970135 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:11:32.970174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:11:32.971251 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:11:32.973386 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:11:32.975111 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:11:32.978494 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:11:32.981855 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
May 17 00:11:32.983020 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:11:32.985545 jq[1439]: false May 17 00:11:32.985577 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:11:32.988506 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:11:32.991586 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:11:32.995582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:11:33.007545 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:11:33.008062 dbus-daemon[1438]: [system] SELinux support is enabled May 17 00:11:33.009010 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:11:33.009143 extend-filesystems[1440]: Found loop3 May 17 00:11:33.010260 extend-filesystems[1440]: Found loop4 May 17 00:11:33.010260 extend-filesystems[1440]: Found loop5 May 17 00:11:33.010260 extend-filesystems[1440]: Found sr0 May 17 00:11:33.010260 extend-filesystems[1440]: Found vda May 17 00:11:33.010260 extend-filesystems[1440]: Found vda1 May 17 00:11:33.010260 extend-filesystems[1440]: Found vda2 May 17 00:11:33.010260 extend-filesystems[1440]: Found vda3 May 17 00:11:33.010260 extend-filesystems[1440]: Found usr May 17 00:11:33.010260 extend-filesystems[1440]: Found vda4 May 17 00:11:33.010260 extend-filesystems[1440]: Found vda6 May 17 00:11:33.010260 extend-filesystems[1440]: Found vda7 May 17 00:11:33.010260 extend-filesystems[1440]: Found vda9 May 17 00:11:33.010260 extend-filesystems[1440]: Checking size of /dev/vda9 May 17 00:11:33.010264 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. May 17 00:11:33.014549 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:11:33.023605 extend-filesystems[1440]: Resized partition /dev/vda9 May 17 00:11:33.026031 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) May 17 00:11:33.032897 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:11:33.029514 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:11:33.033615 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:11:33.035468 jq[1459]: true May 17 00:11:33.038449 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:11:33.046730 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1385) May 17 00:11:33.049008 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:11:33.051459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:11:33.051936 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:11:33.052199 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:11:33.055855 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:11:33.058442 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:11:33.056530 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 17 00:11:33.065917 update_engine[1454]: I20250517 00:11:33.063387 1454 main.cc:92] Flatcar Update Engine starting May 17 00:11:33.076322 update_engine[1454]: I20250517 00:11:33.070338 1454 update_check_scheduler.cc:74] Next update check in 9m47s May 17 00:11:33.073505 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:11:33.076627 jq[1465]: true May 17 00:11:33.078379 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:11:33.079666 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:11:33.079666 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:11:33.079666 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:11:33.078423 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:11:33.089889 extend-filesystems[1440]: Resized filesystem in /dev/vda9 May 17 00:11:33.080861 systemd-logind[1451]: New seat seat0. May 17 00:11:33.081275 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:11:33.081593 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:11:33.091552 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:11:33.096932 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:11:33.102288 tar[1464]: linux-amd64/helm May 17 00:11:33.107368 systemd[1]: Started update-engine.service - Update Engine. May 17 00:11:33.112781 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 17 00:11:33.112961 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:11:33.115564 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:11:33.115678 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:11:33.128053 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 17 00:11:33.136679 bash[1493]: Updated "/home/core/.ssh/authorized_keys" May 17 00:11:33.136508 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:11:33.138563 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 17 00:11:33.158777 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:11:33.217559 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:11:33.241642 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:11:33.254624 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:11:33.263631 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:11:33.263880 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:11:33.265276 containerd[1466]: time="2025-05-17T00:11:33.265182382Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:11:33.275608 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:11:33.287898 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:11:33.289239 containerd[1466]: time="2025-05-17T00:11:33.289204852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:11:33.290971 containerd[1466]: time="2025-05-17T00:11:33.290926291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:11:33.291010 containerd[1466]: time="2025-05-17T00:11:33.290971235Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:11:33.291010 containerd[1466]: time="2025-05-17T00:11:33.290992465Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:11:33.291257 containerd[1466]: time="2025-05-17T00:11:33.291197870Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:11:33.291257 containerd[1466]: time="2025-05-17T00:11:33.291220943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:11:33.291307 containerd[1466]: time="2025-05-17T00:11:33.291293299Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:11:33.291333 containerd[1466]: time="2025-05-17T00:11:33.291306564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:11:33.291547 containerd[1466]: time="2025-05-17T00:11:33.291520856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:11:33.291547 containerd[1466]: time="2025-05-17T00:11:33.291541414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:11:33.291599 containerd[1466]: time="2025-05-17T00:11:33.291556673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:11:33.291599 containerd[1466]: time="2025-05-17T00:11:33.291567573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:11:33.291687 containerd[1466]: time="2025-05-17T00:11:33.291667942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:11:33.291931 containerd[1466]: time="2025-05-17T00:11:33.291912040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:11:33.292097 containerd[1466]: time="2025-05-17T00:11:33.292040010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:11:33.292097 containerd[1466]: time="2025-05-17T00:11:33.292058334Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:11:33.292193 containerd[1466]: time="2025-05-17T00:11:33.292174973Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 00:11:33.292251 containerd[1466]: time="2025-05-17T00:11:33.292235777Z" level=info msg="metadata content store policy set" policy=shared May 17 00:11:33.297689 containerd[1466]: time="2025-05-17T00:11:33.297657977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:11:33.297725 containerd[1466]: time="2025-05-17T00:11:33.297704144Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:11:33.297725 containerd[1466]: time="2025-05-17T00:11:33.297720074Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:11:33.297774 containerd[1466]: time="2025-05-17T00:11:33.297735553Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:11:33.297774 containerd[1466]: time="2025-05-17T00:11:33.297751322Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:11:33.297940 containerd[1466]: time="2025-05-17T00:11:33.297906674Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:11:33.298174 containerd[1466]: time="2025-05-17T00:11:33.298136204Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:11:33.298268 containerd[1466]: time="2025-05-17T00:11:33.298247373Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:11:33.298306 containerd[1466]: time="2025-05-17T00:11:33.298268553Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:11:33.298306 containerd[1466]: time="2025-05-17T00:11:33.298282248Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 17 00:11:33.298306 containerd[1466]: time="2025-05-17T00:11:33.298295483Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298360 containerd[1466]: time="2025-05-17T00:11:33.298308207Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298360 containerd[1466]: time="2025-05-17T00:11:33.298321161Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298360 containerd[1466]: time="2025-05-17T00:11:33.298334196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298360 containerd[1466]: time="2025-05-17T00:11:33.298351548Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298446 containerd[1466]: time="2025-05-17T00:11:33.298366316Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298446 containerd[1466]: time="2025-05-17T00:11:33.298380282Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298446 containerd[1466]: time="2025-05-17T00:11:33.298392595Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:11:33.298446 containerd[1466]: time="2025-05-17T00:11:33.298438291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298525 containerd[1466]: time="2025-05-17T00:11:33.298453499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 17 00:11:33.298525 containerd[1466]: time="2025-05-17T00:11:33.298466143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298525 containerd[1466]: time="2025-05-17T00:11:33.298479438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298525 containerd[1466]: time="2025-05-17T00:11:33.298491972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298525 containerd[1466]: time="2025-05-17T00:11:33.298505507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298525 containerd[1466]: time="2025-05-17T00:11:33.298517099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298634 containerd[1466]: time="2025-05-17T00:11:33.298544731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298634 containerd[1466]: time="2025-05-17T00:11:33.298558266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298634 containerd[1466]: time="2025-05-17T00:11:33.298578033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298634 containerd[1466]: time="2025-05-17T00:11:33.298595095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298634 containerd[1466]: time="2025-05-17T00:11:33.298612037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298634 containerd[1466]: time="2025-05-17T00:11:33.298624901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 17 00:11:33.298746 containerd[1466]: time="2025-05-17T00:11:33.298641292Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:11:33.298746 containerd[1466]: time="2025-05-17T00:11:33.298661740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298746 containerd[1466]: time="2025-05-17T00:11:33.298674664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298746 containerd[1466]: time="2025-05-17T00:11:33.298685925Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:11:33.298819 containerd[1466]: time="2025-05-17T00:11:33.298751709Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:11:33.298819 containerd[1466]: time="2025-05-17T00:11:33.298770424Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:11:33.298819 containerd[1466]: time="2025-05-17T00:11:33.298782066Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:11:33.298819 containerd[1466]: time="2025-05-17T00:11:33.298804969Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:11:33.298819 containerd[1466]: time="2025-05-17T00:11:33.298816801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:11:33.298908 containerd[1466]: time="2025-05-17T00:11:33.298831418Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 17 00:11:33.298908 containerd[1466]: time="2025-05-17T00:11:33.298843391Z" level=info msg="NRI interface is disabled by configuration." May 17 00:11:33.298908 containerd[1466]: time="2025-05-17T00:11:33.298854862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:11:33.299186 containerd[1466]: time="2025-05-17T00:11:33.299131090Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:11:33.299321 containerd[1466]: time="2025-05-17T00:11:33.299189300Z" level=info msg="Connect containerd service" May 17 00:11:33.299321 containerd[1466]: time="2025-05-17T00:11:33.299237280Z" level=info msg="using legacy CRI server" May 17 00:11:33.299321 containerd[1466]: time="2025-05-17T00:11:33.299244834Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:11:33.300586 containerd[1466]: time="2025-05-17T00:11:33.300502022Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301615841Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301760603Z" level=info msg="Start subscribing containerd event" May 17 
00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301843578Z" level=info msg="Start recovering state" May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301926534Z" level=info msg="Start event monitor" May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301932285Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301948124Z" level=info msg="Start snapshots syncer" May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301958925Z" level=info msg="Start cni network conf syncer for default" May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301966198Z" level=info msg="Start streaming server" May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.301991115Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:11:33.302087 containerd[1466]: time="2025-05-17T00:11:33.302036420Z" level=info msg="containerd successfully booted in 0.037885s" May 17 00:11:33.302698 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:11:33.304925 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:11:33.306372 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:11:33.307771 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:11:33.465347 tar[1464]: linux-amd64/LICENSE May 17 00:11:33.465445 tar[1464]: linux-amd64/README.md May 17 00:11:33.485689 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:11:33.928543 systemd-networkd[1406]: eth0: Gained IPv6LL May 17 00:11:33.931909 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:11:33.933734 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:11:33.949649 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
May 17 00:11:33.952092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:11:33.954611 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:11:33.973333 systemd[1]: coreos-metadata.service: Deactivated successfully. May 17 00:11:33.973597 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 17 00:11:33.975414 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:11:33.977390 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:11:34.646851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:11:34.648518 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:11:34.649819 systemd[1]: Startup finished in 674ms (kernel) + 6.215s (initrd) + 3.806s (userspace) = 10.696s. May 17 00:11:34.670723 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:11:35.071460 kubelet[1551]: E0517 00:11:35.071313 1551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:11:35.075634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:11:35.075841 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:11:37.681510 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:11:37.682699 systemd[1]: Started sshd@0-10.0.0.20:22-10.0.0.1:54754.service - OpenSSH per-connection server daemon (10.0.0.1:54754). 
May 17 00:11:37.726504 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 54754 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:11:37.728655 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:11:37.737204 systemd-logind[1451]: New session 1 of user core. May 17 00:11:37.738480 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:11:37.750605 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:11:37.761659 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:11:37.764425 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:11:37.772690 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:11:37.885361 systemd[1569]: Queued start job for default target default.target. May 17 00:11:37.897693 systemd[1569]: Created slice app.slice - User Application Slice. May 17 00:11:37.897721 systemd[1569]: Reached target paths.target - Paths. May 17 00:11:37.897736 systemd[1569]: Reached target timers.target - Timers. May 17 00:11:37.899220 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:11:37.911570 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:11:37.911730 systemd[1569]: Reached target sockets.target - Sockets. May 17 00:11:37.911753 systemd[1569]: Reached target basic.target - Basic System. May 17 00:11:37.911798 systemd[1569]: Reached target default.target - Main User Target. May 17 00:11:37.911840 systemd[1569]: Startup finished in 132ms. May 17 00:11:37.912311 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:11:37.914064 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 17 00:11:37.983182 systemd[1]: Started sshd@1-10.0.0.20:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764). May 17 00:11:38.022587 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:11:38.024203 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:11:38.028479 systemd-logind[1451]: New session 2 of user core. May 17 00:11:38.042522 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:11:38.096250 sshd[1580]: pam_unix(sshd:session): session closed for user core May 17 00:11:38.112250 systemd[1]: sshd@1-10.0.0.20:22-10.0.0.1:54764.service: Deactivated successfully. May 17 00:11:38.114007 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:11:38.115656 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. May 17 00:11:38.124791 systemd[1]: Started sshd@2-10.0.0.20:22-10.0.0.1:45124.service - OpenSSH per-connection server daemon (10.0.0.1:45124). May 17 00:11:38.125788 systemd-logind[1451]: Removed session 2. May 17 00:11:38.152203 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 45124 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:11:38.153725 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:11:38.157675 systemd-logind[1451]: New session 3 of user core. May 17 00:11:38.167536 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:11:38.217765 sshd[1587]: pam_unix(sshd:session): session closed for user core May 17 00:11:38.230474 systemd[1]: sshd@2-10.0.0.20:22-10.0.0.1:45124.service: Deactivated successfully. May 17 00:11:38.232277 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:11:38.233901 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. 
May 17 00:11:38.235620 systemd[1]: Started sshd@3-10.0.0.20:22-10.0.0.1:45138.service - OpenSSH per-connection server daemon (10.0.0.1:45138). May 17 00:11:38.236542 systemd-logind[1451]: Removed session 3. May 17 00:11:38.268949 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 45138 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:11:38.270671 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:11:38.274918 systemd-logind[1451]: New session 4 of user core. May 17 00:11:38.290532 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:11:38.345208 sshd[1594]: pam_unix(sshd:session): session closed for user core May 17 00:11:38.365359 systemd[1]: sshd@3-10.0.0.20:22-10.0.0.1:45138.service: Deactivated successfully. May 17 00:11:38.367120 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:11:38.368774 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. May 17 00:11:38.377637 systemd[1]: Started sshd@4-10.0.0.20:22-10.0.0.1:45142.service - OpenSSH per-connection server daemon (10.0.0.1:45142). May 17 00:11:38.378512 systemd-logind[1451]: Removed session 4. May 17 00:11:38.406563 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 45142 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:11:38.408026 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:11:38.412057 systemd-logind[1451]: New session 5 of user core. May 17 00:11:38.422609 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 17 00:11:38.481204 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:11:38.481568 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:11:38.502505 sudo[1604]: pam_unix(sudo:session): session closed for user root May 17 00:11:38.504784 sshd[1601]: pam_unix(sshd:session): session closed for user core May 17 00:11:38.517124 systemd[1]: sshd@4-10.0.0.20:22-10.0.0.1:45142.service: Deactivated successfully. May 17 00:11:38.518841 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:11:38.520518 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. May 17 00:11:38.521798 systemd[1]: Started sshd@5-10.0.0.20:22-10.0.0.1:45144.service - OpenSSH per-connection server daemon (10.0.0.1:45144). May 17 00:11:38.522583 systemd-logind[1451]: Removed session 5. May 17 00:11:38.556629 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 45144 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:11:38.558305 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:11:38.561947 systemd-logind[1451]: New session 6 of user core. May 17 00:11:38.571512 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:11:38.625792 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:11:38.626137 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:11:38.630105 sudo[1613]: pam_unix(sudo:session): session closed for user root May 17 00:11:38.636621 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:11:38.636957 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:11:38.654611 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
May 17 00:11:38.656723 auditctl[1616]: No rules May 17 00:11:38.658018 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:11:38.658266 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:11:38.659928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:11:38.692042 augenrules[1634]: No rules May 17 00:11:38.693883 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:11:38.695528 sudo[1612]: pam_unix(sudo:session): session closed for user root May 17 00:11:38.697520 sshd[1609]: pam_unix(sshd:session): session closed for user core May 17 00:11:38.705269 systemd[1]: sshd@5-10.0.0.20:22-10.0.0.1:45144.service: Deactivated successfully. May 17 00:11:38.707058 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:11:38.708710 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. May 17 00:11:38.709956 systemd[1]: Started sshd@6-10.0.0.20:22-10.0.0.1:45160.service - OpenSSH per-connection server daemon (10.0.0.1:45160). May 17 00:11:38.710743 systemd-logind[1451]: Removed session 6. May 17 00:11:38.745927 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 45160 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:11:38.747443 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:11:38.751417 systemd-logind[1451]: New session 7 of user core. May 17 00:11:38.769680 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:11:38.823285 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:11:38.823703 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:11:39.109623 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 17 00:11:39.109811 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:11:39.400535 dockerd[1662]: time="2025-05-17T00:11:39.400340936Z" level=info msg="Starting up" May 17 00:11:39.760102 dockerd[1662]: time="2025-05-17T00:11:39.759962393Z" level=info msg="Loading containers: start." May 17 00:11:39.864433 kernel: Initializing XFRM netlink socket May 17 00:11:39.941678 systemd-networkd[1406]: docker0: Link UP May 17 00:11:39.963813 dockerd[1662]: time="2025-05-17T00:11:39.963772042Z" level=info msg="Loading containers: done." May 17 00:11:39.978258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck47570103-merged.mount: Deactivated successfully. May 17 00:11:39.978950 dockerd[1662]: time="2025-05-17T00:11:39.978906360Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:11:39.979079 dockerd[1662]: time="2025-05-17T00:11:39.979050430Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:11:39.979245 dockerd[1662]: time="2025-05-17T00:11:39.979219757Z" level=info msg="Daemon has completed initialization" May 17 00:11:40.016726 dockerd[1662]: time="2025-05-17T00:11:40.016544608Z" level=info msg="API listen on /run/docker.sock" May 17 00:11:40.016762 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:11:40.732238 containerd[1466]: time="2025-05-17T00:11:40.732178312Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:11:41.392854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131684303.mount: Deactivated successfully. 
May 17 00:11:42.250805 containerd[1466]: time="2025-05-17T00:11:42.250750760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:42.251476 containerd[1466]: time="2025-05-17T00:11:42.251446415Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:11:42.252514 containerd[1466]: time="2025-05-17T00:11:42.252487157Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:42.254961 containerd[1466]: time="2025-05-17T00:11:42.254936191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:42.255982 containerd[1466]: time="2025-05-17T00:11:42.255945073Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.523714153s" May 17 00:11:42.256024 containerd[1466]: time="2025-05-17T00:11:42.255980480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:11:42.256871 containerd[1466]: time="2025-05-17T00:11:42.256829182Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:11:43.349971 containerd[1466]: time="2025-05-17T00:11:43.349900691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:43.350730 containerd[1466]: time="2025-05-17T00:11:43.350689150Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:11:43.352064 containerd[1466]: time="2025-05-17T00:11:43.352015238Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:43.354883 containerd[1466]: time="2025-05-17T00:11:43.354840076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:43.355772 containerd[1466]: time="2025-05-17T00:11:43.355722181Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.098840791s" May 17 00:11:43.355772 containerd[1466]: time="2025-05-17T00:11:43.355767566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:11:43.356251 containerd[1466]: time="2025-05-17T00:11:43.356222569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:11:44.658364 containerd[1466]: time="2025-05-17T00:11:44.658307148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:44.659133 containerd[1466]: time="2025-05-17T00:11:44.659085358Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:11:44.660409 containerd[1466]: time="2025-05-17T00:11:44.660347235Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:44.663242 containerd[1466]: time="2025-05-17T00:11:44.663192892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:44.664228 containerd[1466]: time="2025-05-17T00:11:44.664188169Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.307936335s" May 17 00:11:44.664228 containerd[1466]: time="2025-05-17T00:11:44.664224738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:11:44.664758 containerd[1466]: time="2025-05-17T00:11:44.664733853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:11:45.326097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:11:45.336587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:11:45.507963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:11:45.512111 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:11:45.632445 kubelet[1881]: E0517 00:11:45.631554 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:11:45.638202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:11:45.638416 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:11:46.251085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986234838.mount: Deactivated successfully. May 17 00:11:47.363083 containerd[1466]: time="2025-05-17T00:11:47.363004497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:47.364258 containerd[1466]: time="2025-05-17T00:11:47.364201182Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:11:47.365786 containerd[1466]: time="2025-05-17T00:11:47.365748274Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:47.368720 containerd[1466]: time="2025-05-17T00:11:47.368682668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:47.369332 containerd[1466]: time="2025-05-17T00:11:47.369291099Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id 
\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 2.704529013s" May 17 00:11:47.369332 containerd[1466]: time="2025-05-17T00:11:47.369322768Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:11:47.370001 containerd[1466]: time="2025-05-17T00:11:47.369947129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:11:48.113139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916900812.mount: Deactivated successfully. May 17 00:11:48.825105 containerd[1466]: time="2025-05-17T00:11:48.825047081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:48.825883 containerd[1466]: time="2025-05-17T00:11:48.825835530Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:11:48.827148 containerd[1466]: time="2025-05-17T00:11:48.827092648Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:48.830535 containerd[1466]: time="2025-05-17T00:11:48.830486905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:48.831526 containerd[1466]: time="2025-05-17T00:11:48.831495697Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.461497592s" May 17 00:11:48.831566 containerd[1466]: time="2025-05-17T00:11:48.831529160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:11:48.832052 containerd[1466]: time="2025-05-17T00:11:48.832023006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:11:49.342384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877740875.mount: Deactivated successfully. May 17 00:11:49.348489 containerd[1466]: time="2025-05-17T00:11:49.348421211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:49.349151 containerd[1466]: time="2025-05-17T00:11:49.349117087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:11:49.350221 containerd[1466]: time="2025-05-17T00:11:49.350195379Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:49.352307 containerd[1466]: time="2025-05-17T00:11:49.352274218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:49.353004 containerd[1466]: time="2025-05-17T00:11:49.352968952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 520.919015ms" May 17 00:11:49.353004 containerd[1466]: time="2025-05-17T00:11:49.352995131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:11:49.353435 containerd[1466]: time="2025-05-17T00:11:49.353416772Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:11:49.834709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442563783.mount: Deactivated successfully. May 17 00:11:53.150476 containerd[1466]: time="2025-05-17T00:11:53.150389447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:53.163183 containerd[1466]: time="2025-05-17T00:11:53.163139142Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:11:53.186503 containerd[1466]: time="2025-05-17T00:11:53.186448734Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:53.211139 containerd[1466]: time="2025-05-17T00:11:53.211071008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:11:53.212311 containerd[1466]: time="2025-05-17T00:11:53.212279214Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.858840212s" May 17 
00:11:53.212381 containerd[1466]: time="2025-05-17T00:11:53.212321654Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:11:55.519676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:11:55.528638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:11:55.556032 systemd[1]: Reloading requested from client PID 2038 ('systemctl') (unit session-7.scope)... May 17 00:11:55.556048 systemd[1]: Reloading... May 17 00:11:55.641427 zram_generator::config[2080]: No configuration found. May 17 00:11:55.916768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:11:55.996346 systemd[1]: Reloading finished in 439 ms. May 17 00:11:56.054283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:11:56.057267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:11:56.060096 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:11:56.060383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:11:56.069832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:11:56.230279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:11:56.235693 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:11:56.275508 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:11:56.275508 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:11:56.275508 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:11:56.275908 kubelet[2127]: I0517 00:11:56.275563 2127 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:11:56.410524 kubelet[2127]: I0517 00:11:56.410480 2127 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:11:56.410524 kubelet[2127]: I0517 00:11:56.410511 2127 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:11:56.410767 kubelet[2127]: I0517 00:11:56.410750 2127 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:11:56.432999 kubelet[2127]: E0517 00:11:56.432947 2127 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:56.434178 kubelet[2127]: I0517 00:11:56.434156 2127 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:11:56.439791 kubelet[2127]: E0517 00:11:56.439758 2127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:11:56.439791 kubelet[2127]: I0517 00:11:56.439785 2127 
server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:11:56.445358 kubelet[2127]: I0517 00:11:56.445334 2127 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:11:56.445962 kubelet[2127]: I0517 00:11:56.445927 2127 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:11:56.446125 kubelet[2127]: I0517 00:11:56.446083 2127 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:11:56.446296 kubelet[2127]: I0517 00:11:56.446112 2127 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"C
PUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:11:56.446425 kubelet[2127]: I0517 00:11:56.446300 2127 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:11:56.446425 kubelet[2127]: I0517 00:11:56.446312 2127 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:11:56.446490 kubelet[2127]: I0517 00:11:56.446462 2127 state_mem.go:36] "Initialized new in-memory state store" May 17 00:11:56.448306 kubelet[2127]: I0517 00:11:56.448279 2127 kubelet.go:408] "Attempting to sync node with API server" May 17 00:11:56.448306 kubelet[2127]: I0517 00:11:56.448302 2127 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:11:56.448381 kubelet[2127]: I0517 00:11:56.448340 2127 kubelet.go:314] "Adding apiserver pod source" May 17 00:11:56.448381 kubelet[2127]: I0517 00:11:56.448362 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:11:56.452970 kubelet[2127]: W0517 00:11:56.452898 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:56.452970 kubelet[2127]: E0517 00:11:56.452954 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 
00:11:56.453532 kubelet[2127]: I0517 00:11:56.453109 2127 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:11:56.453532 kubelet[2127]: W0517 00:11:56.453456 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:56.453532 kubelet[2127]: E0517 00:11:56.453502 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:56.453655 kubelet[2127]: I0517 00:11:56.453606 2127 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:11:56.454098 kubelet[2127]: W0517 00:11:56.454080 2127 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:11:56.456928 kubelet[2127]: I0517 00:11:56.456909 2127 server.go:1274] "Started kubelet" May 17 00:11:56.457325 kubelet[2127]: I0517 00:11:56.457281 2127 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:11:56.457727 kubelet[2127]: I0517 00:11:56.457707 2127 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:11:56.458381 kubelet[2127]: I0517 00:11:56.457794 2127 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:11:56.458550 kubelet[2127]: I0517 00:11:56.458534 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:11:56.459898 kubelet[2127]: I0517 00:11:56.458851 2127 server.go:449] "Adding debug handlers to kubelet server" May 17 00:11:56.459898 kubelet[2127]: I0517 00:11:56.459456 2127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:11:56.461119 kubelet[2127]: E0517 00:11:56.459825 2127 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.20:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.20:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18402812c27762f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:11:56.456878836 +0000 UTC m=+0.216811487,LastTimestamp:2025-05-17 00:11:56.456878836 +0000 UTC m=+0.216811487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:11:56.461823 kubelet[2127]: I0517 00:11:56.461761 2127 
volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:11:56.462062 kubelet[2127]: I0517 00:11:56.461872 2127 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:11:56.462062 kubelet[2127]: I0517 00:11:56.461938 2127 reconciler.go:26] "Reconciler: start to sync state" May 17 00:11:56.462256 kubelet[2127]: E0517 00:11:56.462241 2127 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:11:56.462332 kubelet[2127]: W0517 00:11:56.462241 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:56.462428 kubelet[2127]: E0517 00:11:56.462413 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:56.462483 kubelet[2127]: E0517 00:11:56.462323 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:11:56.462581 kubelet[2127]: I0517 00:11:56.462546 2127 factory.go:221] Registration of the systemd container factory successfully May 17 00:11:56.462698 kubelet[2127]: I0517 00:11:56.462640 2127 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:11:56.462698 kubelet[2127]: E0517 00:11:56.462637 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="200ms" May 17 00:11:56.463789 kubelet[2127]: I0517 00:11:56.463774 2127 factory.go:221] Registration of the containerd container factory successfully May 17 00:11:56.476233 kubelet[2127]: I0517 00:11:56.476162 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:11:56.477775 kubelet[2127]: I0517 00:11:56.477751 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:11:56.477775 kubelet[2127]: I0517 00:11:56.477773 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:11:56.477831 kubelet[2127]: I0517 00:11:56.477790 2127 state_mem.go:36] "Initialized new in-memory state store" May 17 00:11:56.478220 kubelet[2127]: I0517 00:11:56.478198 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:11:56.478596 kubelet[2127]: I0517 00:11:56.478576 2127 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:11:56.478626 kubelet[2127]: I0517 00:11:56.478604 2127 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:11:56.478833 kubelet[2127]: E0517 00:11:56.478653 2127 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:11:56.480936 kubelet[2127]: W0517 00:11:56.480830 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:56.480936 kubelet[2127]: E0517 00:11:56.480883 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:56.563334 kubelet[2127]: E0517 00:11:56.563302 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:11:56.579478 kubelet[2127]: E0517 00:11:56.579448 2127 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:11:56.663092 kubelet[2127]: E0517 00:11:56.663047 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="400ms" May 17 00:11:56.664084 kubelet[2127]: E0517 00:11:56.664062 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:11:56.764364 kubelet[2127]: E0517 00:11:56.764263 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:11:56.780499 kubelet[2127]: E0517 00:11:56.780437 2127 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:11:56.853875 kubelet[2127]: I0517 00:11:56.853812 2127 policy_none.go:49] "None policy: Start" May 17 00:11:56.854753 kubelet[2127]: I0517 00:11:56.854711 2127 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:11:56.854801 kubelet[2127]: I0517 00:11:56.854758 2127 state_mem.go:35] "Initializing new in-memory state store" May 17 00:11:56.860614 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 17 00:11:56.872978 kubelet[2127]: E0517 00:11:56.864415 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:11:56.876517 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:11:56.879193 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:11:56.889204 kubelet[2127]: I0517 00:11:56.889172 2127 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:11:56.889393 kubelet[2127]: I0517 00:11:56.889373 2127 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:11:56.889443 kubelet[2127]: I0517 00:11:56.889388 2127 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:11:56.889994 kubelet[2127]: I0517 00:11:56.889598 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:11:56.890605 kubelet[2127]: E0517 00:11:56.890585 2127 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:11:56.991320 kubelet[2127]: I0517 00:11:56.991281 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:11:56.991658 kubelet[2127]: E0517 00:11:56.991629 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 17 00:11:57.064573 kubelet[2127]: E0517 00:11:57.064423 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="800ms" May 17 00:11:57.188364 systemd[1]: Created slice 
kubepods-burstable-pod337e0aaaf41c6483df6471a9dcca0ad6.slice - libcontainer container kubepods-burstable-pod337e0aaaf41c6483df6471a9dcca0ad6.slice. May 17 00:11:57.193212 kubelet[2127]: I0517 00:11:57.193181 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:11:57.193555 kubelet[2127]: E0517 00:11:57.193516 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 17 00:11:57.209108 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 17 00:11:57.233485 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 17 00:11:57.265423 kubelet[2127]: I0517 00:11:57.265368 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/337e0aaaf41c6483df6471a9dcca0ad6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"337e0aaaf41c6483df6471a9dcca0ad6\") " pod="kube-system/kube-apiserver-localhost" May 17 00:11:57.265423 kubelet[2127]: I0517 00:11:57.265421 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/337e0aaaf41c6483df6471a9dcca0ad6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"337e0aaaf41c6483df6471a9dcca0ad6\") " pod="kube-system/kube-apiserver-localhost" May 17 00:11:57.265574 kubelet[2127]: I0517 00:11:57.265443 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:11:57.265574 kubelet[2127]: I0517 00:11:57.265465 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:11:57.265574 kubelet[2127]: I0517 00:11:57.265484 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:11:57.265574 kubelet[2127]: I0517 00:11:57.265502 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:11:57.265574 kubelet[2127]: I0517 00:11:57.265517 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:11:57.265676 kubelet[2127]: I0517 00:11:57.265532 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/337e0aaaf41c6483df6471a9dcca0ad6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"337e0aaaf41c6483df6471a9dcca0ad6\") " pod="kube-system/kube-apiserver-localhost" May 17 00:11:57.265676 kubelet[2127]: I0517 00:11:57.265546 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:11:57.285919 kubelet[2127]: W0517 00:11:57.285862 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:57.286282 kubelet[2127]: E0517 00:11:57.285919 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:57.508812 kubelet[2127]: E0517 00:11:57.508672 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:11:57.509468 containerd[1466]: time="2025-05-17T00:11:57.509420583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:337e0aaaf41c6483df6471a9dcca0ad6,Namespace:kube-system,Attempt:0,}" May 17 00:11:57.531986 kubelet[2127]: E0517 00:11:57.531917 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:11:57.534777 containerd[1466]: time="2025-05-17T00:11:57.534703076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 17 00:11:57.535834 kubelet[2127]: E0517 00:11:57.535798 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:11:57.536365 containerd[1466]: time="2025-05-17T00:11:57.536319178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 17 00:11:57.543728 kubelet[2127]: W0517 00:11:57.543646 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:57.543850 kubelet[2127]: E0517 00:11:57.543736 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:57.549552 kubelet[2127]: W0517 00:11:57.549502 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:57.549657 kubelet[2127]: E0517 00:11:57.549553 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:57.595408 kubelet[2127]: I0517 00:11:57.595360 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:11:57.595740 kubelet[2127]: E0517 00:11:57.595694 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 17 00:11:57.865609 kubelet[2127]: E0517 00:11:57.865461 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="1.6s" May 17 00:11:57.865743 kubelet[2127]: W0517 00:11:57.865607 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:57.865743 kubelet[2127]: E0517 00:11:57.865672 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:58.397810 kubelet[2127]: I0517 00:11:58.397764 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:11:58.398314 kubelet[2127]: E0517 00:11:58.398182 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 17 00:11:58.634382 
kubelet[2127]: E0517 00:11:58.634331 2127 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:59.185548 kubelet[2127]: W0517 00:11:59.185497 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:59.185548 kubelet[2127]: E0517 00:11:59.185544 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:11:59.466533 kubelet[2127]: E0517 00:11:59.466367 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.20:6443: connect: connection refused" interval="3.2s" May 17 00:11:59.467667 kubelet[2127]: W0517 00:11:59.467634 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:11:59.467701 kubelet[2127]: E0517 00:11:59.467686 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:12:00.000473 kubelet[2127]: I0517 00:12:00.000431 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:12:00.000823 kubelet[2127]: E0517 00:12:00.000778 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.20:6443/api/v1/nodes\": dial tcp 10.0.0.20:6443: connect: connection refused" node="localhost" May 17 00:12:00.162884 kubelet[2127]: W0517 00:12:00.162787 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:12:00.162884 kubelet[2127]: E0517 00:12:00.162868 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:12:00.797157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801895314.mount: Deactivated successfully. 
May 17 00:12:00.879521 kubelet[2127]: W0517 00:12:00.879488 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.20:6443: connect: connection refused May 17 00:12:00.879835 kubelet[2127]: E0517 00:12:00.879533 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.20:6443: connect: connection refused" logger="UnhandledError" May 17 00:12:01.017298 containerd[1466]: time="2025-05-17T00:12:01.017231023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:12:01.039319 containerd[1466]: time="2025-05-17T00:12:01.039254072Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:12:01.051219 containerd[1466]: time="2025-05-17T00:12:01.051085444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:12:01.080563 containerd[1466]: time="2025-05-17T00:12:01.080497954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:12:01.105359 containerd[1466]: time="2025-05-17T00:12:01.105282793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:12:01.175034 containerd[1466]: time="2025-05-17T00:12:01.174935111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active 
requests=0, bytes read=0" May 17 00:12:01.176324 containerd[1466]: time="2025-05-17T00:12:01.176262701Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:12:01.243990 containerd[1466]: time="2025-05-17T00:12:01.243931809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:12:01.244801 containerd[1466]: time="2025-05-17T00:12:01.244757558Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.735252717s" May 17 00:12:01.245511 containerd[1466]: time="2025-05-17T00:12:01.245470806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.709069664s" May 17 00:12:01.246135 containerd[1466]: time="2025-05-17T00:12:01.246105787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.711295369s" May 17 00:12:01.436526 containerd[1466]: time="2025-05-17T00:12:01.435986008Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:12:01.436526 containerd[1466]: time="2025-05-17T00:12:01.436033767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:12:01.436526 containerd[1466]: time="2025-05-17T00:12:01.436052342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:01.436704 containerd[1466]: time="2025-05-17T00:12:01.436582597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:01.438485 containerd[1466]: time="2025-05-17T00:12:01.438339733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:12:01.438485 containerd[1466]: time="2025-05-17T00:12:01.438387783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:12:01.439161 containerd[1466]: time="2025-05-17T00:12:01.439122761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:01.440482 containerd[1466]: time="2025-05-17T00:12:01.439509357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:01.440726 containerd[1466]: time="2025-05-17T00:12:01.440634717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:12:01.440726 containerd[1466]: time="2025-05-17T00:12:01.440685403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:12:01.440726 containerd[1466]: time="2025-05-17T00:12:01.440697545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:01.440933 containerd[1466]: time="2025-05-17T00:12:01.440853508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:01.467532 systemd[1]: Started cri-containerd-ce3c15b511e260f0dca7762fcb17fd1a0a9e4226979701167dd6770888c976db.scope - libcontainer container ce3c15b511e260f0dca7762fcb17fd1a0a9e4226979701167dd6770888c976db. May 17 00:12:01.469122 systemd[1]: Started cri-containerd-fd719409f2a0a2959aad2c6f0d2f6143dfe23348087f4c17c89d535995e37f3f.scope - libcontainer container fd719409f2a0a2959aad2c6f0d2f6143dfe23348087f4c17c89d535995e37f3f. May 17 00:12:01.472969 systemd[1]: Started cri-containerd-557bacc6a2916c513da0502067bc66f935b52e6bd439e78a7a261bb0f7a73744.scope - libcontainer container 557bacc6a2916c513da0502067bc66f935b52e6bd439e78a7a261bb0f7a73744. 
May 17 00:12:01.514986 containerd[1466]: time="2025-05-17T00:12:01.514936056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce3c15b511e260f0dca7762fcb17fd1a0a9e4226979701167dd6770888c976db\"" May 17 00:12:01.519182 kubelet[2127]: E0517 00:12:01.519143 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:01.523258 containerd[1466]: time="2025-05-17T00:12:01.522978311Z" level=info msg="CreateContainer within sandbox \"ce3c15b511e260f0dca7762fcb17fd1a0a9e4226979701167dd6770888c976db\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:12:01.525776 containerd[1466]: time="2025-05-17T00:12:01.525737696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:337e0aaaf41c6483df6471a9dcca0ad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"557bacc6a2916c513da0502067bc66f935b52e6bd439e78a7a261bb0f7a73744\"" May 17 00:12:01.526546 kubelet[2127]: E0517 00:12:01.526518 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:01.529072 containerd[1466]: time="2025-05-17T00:12:01.529022528Z" level=info msg="CreateContainer within sandbox \"557bacc6a2916c513da0502067bc66f935b52e6bd439e78a7a261bb0f7a73744\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:12:01.534239 containerd[1466]: time="2025-05-17T00:12:01.534206471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd719409f2a0a2959aad2c6f0d2f6143dfe23348087f4c17c89d535995e37f3f\"" May 17 
00:12:01.535447 kubelet[2127]: E0517 00:12:01.535343 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:01.536985 containerd[1466]: time="2025-05-17T00:12:01.536895595Z" level=info msg="CreateContainer within sandbox \"fd719409f2a0a2959aad2c6f0d2f6143dfe23348087f4c17c89d535995e37f3f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:12:01.547258 containerd[1466]: time="2025-05-17T00:12:01.547131665Z" level=info msg="CreateContainer within sandbox \"ce3c15b511e260f0dca7762fcb17fd1a0a9e4226979701167dd6770888c976db\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a12ee7238761b02f10053753e3b7612eb30569f857671fd9bafd20c77be2d44\"" May 17 00:12:01.548500 containerd[1466]: time="2025-05-17T00:12:01.547877484Z" level=info msg="StartContainer for \"7a12ee7238761b02f10053753e3b7612eb30569f857671fd9bafd20c77be2d44\"" May 17 00:12:01.558216 containerd[1466]: time="2025-05-17T00:12:01.558162626Z" level=info msg="CreateContainer within sandbox \"557bacc6a2916c513da0502067bc66f935b52e6bd439e78a7a261bb0f7a73744\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1cf95b1c932da409dbbc94632f3ef6d196ccdf39ba979a81fdc710146b5d86a9\"" May 17 00:12:01.558781 containerd[1466]: time="2025-05-17T00:12:01.558752433Z" level=info msg="StartContainer for \"1cf95b1c932da409dbbc94632f3ef6d196ccdf39ba979a81fdc710146b5d86a9\"" May 17 00:12:01.562132 containerd[1466]: time="2025-05-17T00:12:01.562092628Z" level=info msg="CreateContainer within sandbox \"fd719409f2a0a2959aad2c6f0d2f6143dfe23348087f4c17c89d535995e37f3f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"917b126626a6962ede366f8d53e489d5929a1c3d2ddeecdc19efd174ab23a98b\"" May 17 00:12:01.562845 containerd[1466]: time="2025-05-17T00:12:01.562804543Z" level=info 
msg="StartContainer for \"917b126626a6962ede366f8d53e489d5929a1c3d2ddeecdc19efd174ab23a98b\"" May 17 00:12:01.578709 systemd[1]: Started cri-containerd-7a12ee7238761b02f10053753e3b7612eb30569f857671fd9bafd20c77be2d44.scope - libcontainer container 7a12ee7238761b02f10053753e3b7612eb30569f857671fd9bafd20c77be2d44. May 17 00:12:01.607718 systemd[1]: Started cri-containerd-1cf95b1c932da409dbbc94632f3ef6d196ccdf39ba979a81fdc710146b5d86a9.scope - libcontainer container 1cf95b1c932da409dbbc94632f3ef6d196ccdf39ba979a81fdc710146b5d86a9. May 17 00:12:01.609812 systemd[1]: Started cri-containerd-917b126626a6962ede366f8d53e489d5929a1c3d2ddeecdc19efd174ab23a98b.scope - libcontainer container 917b126626a6962ede366f8d53e489d5929a1c3d2ddeecdc19efd174ab23a98b. May 17 00:12:01.648474 containerd[1466]: time="2025-05-17T00:12:01.648348249Z" level=info msg="StartContainer for \"7a12ee7238761b02f10053753e3b7612eb30569f857671fd9bafd20c77be2d44\" returns successfully" May 17 00:12:01.665805 containerd[1466]: time="2025-05-17T00:12:01.665736183Z" level=info msg="StartContainer for \"917b126626a6962ede366f8d53e489d5929a1c3d2ddeecdc19efd174ab23a98b\" returns successfully" May 17 00:12:01.671732 containerd[1466]: time="2025-05-17T00:12:01.671344533Z" level=info msg="StartContainer for \"1cf95b1c932da409dbbc94632f3ef6d196ccdf39ba979a81fdc710146b5d86a9\" returns successfully" May 17 00:12:02.496430 kubelet[2127]: E0517 00:12:02.496383 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:02.497651 kubelet[2127]: E0517 00:12:02.497612 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:02.502846 kubelet[2127]: E0517 00:12:02.502825 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:02.669530 kubelet[2127]: E0517 00:12:02.669486 2127 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 00:12:02.847951 kubelet[2127]: E0517 00:12:02.847901 2127 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 17 00:12:03.202903 kubelet[2127]: I0517 00:12:03.202783 2127 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:12:03.211708 kubelet[2127]: I0517 00:12:03.211665 2127 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:12:03.211708 kubelet[2127]: E0517 00:12:03.211708 2127 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 17 00:12:03.219804 kubelet[2127]: E0517 00:12:03.219768 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:03.320313 kubelet[2127]: E0517 00:12:03.320264 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:03.420924 kubelet[2127]: E0517 00:12:03.420862 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:03.504940 kubelet[2127]: E0517 00:12:03.504815 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:03.521049 kubelet[2127]: E0517 00:12:03.520990 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:03.621792 kubelet[2127]: E0517 00:12:03.621708 2127 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:03.722281 kubelet[2127]: E0517 00:12:03.722237 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:03.823389 kubelet[2127]: E0517 00:12:03.823331 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:03.923502 kubelet[2127]: E0517 00:12:03.923454 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:04.024562 kubelet[2127]: E0517 00:12:04.024515 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:04.125350 kubelet[2127]: E0517 00:12:04.125192 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:04.225903 kubelet[2127]: E0517 00:12:04.225836 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:04.326390 kubelet[2127]: E0517 00:12:04.326333 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:04.426964 kubelet[2127]: E0517 00:12:04.426845 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:12:04.675021 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-7.scope)... May 17 00:12:04.675038 systemd[1]: Reloading... May 17 00:12:04.752449 zram_generator::config[2454]: No configuration found. May 17 00:12:04.862432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:12:04.953677 systemd[1]: Reloading finished in 278 ms. May 17 00:12:04.997372 kubelet[2127]: I0517 00:12:04.997319 2127 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:12:04.997531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:12:05.012820 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:12:05.013061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:12:05.029705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:12:05.207614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:12:05.207774 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:12:05.246656 kubelet[2496]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:12:05.246656 kubelet[2496]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:12:05.246656 kubelet[2496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:12:05.247027 kubelet[2496]: I0517 00:12:05.246701 2496 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:12:05.255883 kubelet[2496]: I0517 00:12:05.255844 2496 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:12:05.255883 kubelet[2496]: I0517 00:12:05.255872 2496 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:12:05.256103 kubelet[2496]: I0517 00:12:05.256089 2496 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:12:05.257522 kubelet[2496]: I0517 00:12:05.257488 2496 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:12:05.259223 kubelet[2496]: I0517 00:12:05.259196 2496 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:12:05.262150 kubelet[2496]: E0517 00:12:05.262127 2496 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:12:05.262150 kubelet[2496]: I0517 00:12:05.262149 2496 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:12:05.269814 kubelet[2496]: I0517 00:12:05.269766 2496 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:12:05.269961 kubelet[2496]: I0517 00:12:05.269877 2496 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:12:05.270024 kubelet[2496]: I0517 00:12:05.269993 2496 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:12:05.270196 kubelet[2496]: I0517 00:12:05.270018 2496 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 17 00:12:05.270293 kubelet[2496]: I0517 00:12:05.270199 2496 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:12:05.270293 kubelet[2496]: I0517 00:12:05.270207 2496 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:12:05.270293 kubelet[2496]: I0517 00:12:05.270230 2496 state_mem.go:36] "Initialized new in-memory state store" May 17 00:12:05.270368 kubelet[2496]: I0517 00:12:05.270332 2496 kubelet.go:408] "Attempting to sync node with API server" May 17 00:12:05.270368 kubelet[2496]: I0517 00:12:05.270342 2496 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:12:05.270368 kubelet[2496]: I0517 00:12:05.270369 2496 kubelet.go:314] "Adding apiserver pod source" May 17 00:12:05.270441 kubelet[2496]: I0517 00:12:05.270378 2496 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:12:05.271294 kubelet[2496]: I0517 00:12:05.271275 2496 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:12:05.271731 kubelet[2496]: I0517 00:12:05.271705 2496 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:12:05.272213 kubelet[2496]: I0517 00:12:05.272101 2496 server.go:1274] "Started kubelet" May 17 00:12:05.272262 kubelet[2496]: I0517 00:12:05.272225 2496 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:12:05.272577 kubelet[2496]: I0517 00:12:05.272353 2496 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:12:05.272805 kubelet[2496]: I0517 00:12:05.272645 2496 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:12:05.274419 kubelet[2496]: I0517 00:12:05.274130 2496 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 
00:12:05.277470 kubelet[2496]: I0517 00:12:05.277453 2496 server.go:449] "Adding debug handlers to kubelet server" May 17 00:12:05.282243 kubelet[2496]: I0517 00:12:05.281150 2496 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:12:05.282243 kubelet[2496]: I0517 00:12:05.281699 2496 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:12:05.282243 kubelet[2496]: I0517 00:12:05.281783 2496 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:12:05.282243 kubelet[2496]: I0517 00:12:05.281909 2496 reconciler.go:26] "Reconciler: start to sync state" May 17 00:12:05.283734 kubelet[2496]: I0517 00:12:05.283714 2496 factory.go:221] Registration of the systemd container factory successfully May 17 00:12:05.283922 kubelet[2496]: I0517 00:12:05.283901 2496 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:12:05.285415 kubelet[2496]: I0517 00:12:05.285320 2496 factory.go:221] Registration of the containerd container factory successfully May 17 00:12:05.286868 kubelet[2496]: E0517 00:12:05.286554 2496 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:12:05.291671 kubelet[2496]: I0517 00:12:05.291633 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:12:05.293679 kubelet[2496]: I0517 00:12:05.293656 2496 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:12:05.293867 kubelet[2496]: I0517 00:12:05.293853 2496 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:12:05.293957 kubelet[2496]: I0517 00:12:05.293944 2496 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:12:05.294246 kubelet[2496]: E0517 00:12:05.294048 2496 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:12:05.319226 kubelet[2496]: I0517 00:12:05.319197 2496 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:12:05.319226 kubelet[2496]: I0517 00:12:05.319218 2496 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:12:05.319226 kubelet[2496]: I0517 00:12:05.319238 2496 state_mem.go:36] "Initialized new in-memory state store" May 17 00:12:05.319418 kubelet[2496]: I0517 00:12:05.319389 2496 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:12:05.319446 kubelet[2496]: I0517 00:12:05.319419 2496 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:12:05.319446 kubelet[2496]: I0517 00:12:05.319442 2496 policy_none.go:49] "None policy: Start" May 17 00:12:05.320718 kubelet[2496]: I0517 00:12:05.319909 2496 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:12:05.320718 kubelet[2496]: I0517 00:12:05.319927 2496 state_mem.go:35] "Initializing new in-memory state store" May 17 00:12:05.320718 kubelet[2496]: I0517 00:12:05.320040 2496 state_mem.go:75] "Updated machine memory state" May 17 00:12:05.323898 kubelet[2496]: I0517 00:12:05.323794 2496 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:12:05.323963 kubelet[2496]: I0517 00:12:05.323951 2496 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:12:05.323985 kubelet[2496]: I0517 00:12:05.323966 2496 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:12:05.324167 kubelet[2496]: I0517 00:12:05.324149 2496 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:12:05.429813 kubelet[2496]: I0517 00:12:05.429669 2496 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:12:05.434657 kubelet[2496]: I0517 00:12:05.434636 2496 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 17 00:12:05.434732 kubelet[2496]: I0517 00:12:05.434703 2496 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:12:05.483756 kubelet[2496]: I0517 00:12:05.483656 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:12:05.483756 kubelet[2496]: I0517 00:12:05.483687 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:12:05.483756 kubelet[2496]: I0517 00:12:05.483704 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/337e0aaaf41c6483df6471a9dcca0ad6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"337e0aaaf41c6483df6471a9dcca0ad6\") " pod="kube-system/kube-apiserver-localhost" May 17 00:12:05.483756 kubelet[2496]: I0517 00:12:05.483721 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/337e0aaaf41c6483df6471a9dcca0ad6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"337e0aaaf41c6483df6471a9dcca0ad6\") " pod="kube-system/kube-apiserver-localhost" May 17 00:12:05.483756 kubelet[2496]: I0517 00:12:05.483736 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/337e0aaaf41c6483df6471a9dcca0ad6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"337e0aaaf41c6483df6471a9dcca0ad6\") " pod="kube-system/kube-apiserver-localhost" May 17 00:12:05.484008 kubelet[2496]: I0517 00:12:05.483753 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:12:05.484008 kubelet[2496]: I0517 00:12:05.483769 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:12:05.484008 kubelet[2496]: I0517 00:12:05.483783 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:12:05.484008 kubelet[2496]: I0517 00:12:05.483799 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:12:05.674292 sudo[2535]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:12:05.674699 sudo[2535]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:12:05.706572 kubelet[2496]: E0517 00:12:05.706534 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:05.706715 kubelet[2496]: E0517 00:12:05.706680 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:05.706926 kubelet[2496]: E0517 00:12:05.706904 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:06.141315 sudo[2535]: pam_unix(sudo:session): session closed for user root May 17 00:12:06.271059 kubelet[2496]: I0517 00:12:06.271029 2496 apiserver.go:52] "Watching apiserver" May 17 00:12:06.282127 kubelet[2496]: I0517 00:12:06.282098 2496 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:12:06.305376 kubelet[2496]: E0517 00:12:06.304652 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:06.305511 kubelet[2496]: E0517 00:12:06.305494 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:06.310470 kubelet[2496]: E0517 00:12:06.310431 2496 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:12:06.310659 kubelet[2496]: E0517 00:12:06.310641 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:06.352485 kubelet[2496]: I0517 00:12:06.352017 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.351997572 podStartE2EDuration="1.351997572s" podCreationTimestamp="2025-05-17 00:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:12:06.343924876 +0000 UTC m=+1.132258016" watchObservedRunningTime="2025-05-17 00:12:06.351997572 +0000 UTC m=+1.140330712" May 17 00:12:06.361450 kubelet[2496]: I0517 00:12:06.361408 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.361380022 podStartE2EDuration="1.361380022s" podCreationTimestamp="2025-05-17 00:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:12:06.352566996 +0000 UTC m=+1.140900136" watchObservedRunningTime="2025-05-17 00:12:06.361380022 +0000 UTC m=+1.149713162" May 17 00:12:06.369663 kubelet[2496]: I0517 00:12:06.369609 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.369584609 podStartE2EDuration="1.369584609s" podCreationTimestamp="2025-05-17 00:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:12:06.362373532 +0000 UTC m=+1.150706672" watchObservedRunningTime="2025-05-17 00:12:06.369584609 +0000 UTC m=+1.157917749" May 17 00:12:07.305371 kubelet[2496]: E0517 00:12:07.305343 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:07.786748 sudo[1645]: pam_unix(sudo:session): session closed for user root May 17 00:12:07.789157 sshd[1642]: pam_unix(sshd:session): session closed for user core May 17 00:12:07.795901 systemd[1]: sshd@6-10.0.0.20:22-10.0.0.1:45160.service: Deactivated successfully. May 17 00:12:07.798512 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:12:07.798754 systemd[1]: session-7.scope: Consumed 4.536s CPU time, 158.8M memory peak, 0B memory swap peak. May 17 00:12:07.799394 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. May 17 00:12:07.800503 systemd-logind[1451]: Removed session 7. May 17 00:12:08.583732 kubelet[2496]: E0517 00:12:08.583671 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:09.841078 kubelet[2496]: I0517 00:12:09.841027 2496 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:12:09.841592 containerd[1466]: time="2025-05-17T00:12:09.841548723Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 17 00:12:09.842774 kubelet[2496]: I0517 00:12:09.842057 2496 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:12:10.474353 systemd[1]: Created slice kubepods-besteffort-pod85980a0d_675a_402f_84fc_93d7540b5a50.slice - libcontainer container kubepods-besteffort-pod85980a0d_675a_402f_84fc_93d7540b5a50.slice. May 17 00:12:10.491412 systemd[1]: Created slice kubepods-burstable-pod3a36152b_cd25_402b_8771_e96771121b3f.slice - libcontainer container kubepods-burstable-pod3a36152b_cd25_402b_8771_e96771121b3f.slice. May 17 00:12:10.517127 kubelet[2496]: I0517 00:12:10.517069 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-cgroup\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517127 kubelet[2496]: I0517 00:12:10.517108 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-hubble-tls\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517127 kubelet[2496]: I0517 00:12:10.517133 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85980a0d-675a-402f-84fc-93d7540b5a50-lib-modules\") pod \"kube-proxy-fgpsv\" (UID: \"85980a0d-675a-402f-84fc-93d7540b5a50\") " pod="kube-system/kube-proxy-fgpsv" May 17 00:12:10.517370 kubelet[2496]: I0517 00:12:10.517151 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a36152b-cd25-402b-8771-e96771121b3f-cilium-config-path\") pod \"cilium-qcscv\" (UID: 
\"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517370 kubelet[2496]: I0517 00:12:10.517204 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-lib-modules\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517370 kubelet[2496]: I0517 00:12:10.517248 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-kernel\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517370 kubelet[2496]: I0517 00:12:10.517272 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cni-path\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517370 kubelet[2496]: I0517 00:12:10.517296 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85980a0d-675a-402f-84fc-93d7540b5a50-kube-proxy\") pod \"kube-proxy-fgpsv\" (UID: \"85980a0d-675a-402f-84fc-93d7540b5a50\") " pod="kube-system/kube-proxy-fgpsv" May 17 00:12:10.517370 kubelet[2496]: I0517 00:12:10.517314 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-hostproc\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517579 kubelet[2496]: I0517 00:12:10.517369 2496 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pg94\" (UniqueName: \"kubernetes.io/projected/85980a0d-675a-402f-84fc-93d7540b5a50-kube-api-access-4pg94\") pod \"kube-proxy-fgpsv\" (UID: \"85980a0d-675a-402f-84fc-93d7540b5a50\") " pod="kube-system/kube-proxy-fgpsv" May 17 00:12:10.517579 kubelet[2496]: I0517 00:12:10.517391 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzlg6\" (UniqueName: \"kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-kube-api-access-xzlg6\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517579 kubelet[2496]: I0517 00:12:10.517432 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-run\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517579 kubelet[2496]: I0517 00:12:10.517450 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-etc-cni-netd\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517579 kubelet[2496]: I0517 00:12:10.517469 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-xtables-lock\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517731 kubelet[2496]: I0517 00:12:10.517491 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a36152b-cd25-402b-8771-e96771121b3f-clustermesh-secrets\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517731 kubelet[2496]: I0517 00:12:10.517511 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-net\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.517731 kubelet[2496]: I0517 00:12:10.517532 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85980a0d-675a-402f-84fc-93d7540b5a50-xtables-lock\") pod \"kube-proxy-fgpsv\" (UID: \"85980a0d-675a-402f-84fc-93d7540b5a50\") " pod="kube-system/kube-proxy-fgpsv" May 17 00:12:10.517731 kubelet[2496]: I0517 00:12:10.517561 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-bpf-maps\") pod \"cilium-qcscv\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") " pod="kube-system/cilium-qcscv" May 17 00:12:10.788794 kubelet[2496]: E0517 00:12:10.788755 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:10.790013 containerd[1466]: time="2025-05-17T00:12:10.789964547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgpsv,Uid:85980a0d-675a-402f-84fc-93d7540b5a50,Namespace:kube-system,Attempt:0,}" May 17 00:12:10.794813 kubelet[2496]: E0517 00:12:10.794776 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:10.795341 containerd[1466]: time="2025-05-17T00:12:10.795309737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qcscv,Uid:3a36152b-cd25-402b-8771-e96771121b3f,Namespace:kube-system,Attempt:0,}" May 17 00:12:10.994452 systemd[1]: Created slice kubepods-besteffort-pod646684e3_f939_4882_898c_878848df2573.slice - libcontainer container kubepods-besteffort-pod646684e3_f939_4882_898c_878848df2573.slice. May 17 00:12:11.016475 containerd[1466]: time="2025-05-17T00:12:11.016201402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:12:11.016475 containerd[1466]: time="2025-05-17T00:12:11.016260704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:12:11.016475 containerd[1466]: time="2025-05-17T00:12:11.016271145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:11.016475 containerd[1466]: time="2025-05-17T00:12:11.016364422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:11.016966 containerd[1466]: time="2025-05-17T00:12:11.016076336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:12:11.016966 containerd[1466]: time="2025-05-17T00:12:11.016140236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:12:11.016966 containerd[1466]: time="2025-05-17T00:12:11.016153912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:11.016966 containerd[1466]: time="2025-05-17T00:12:11.016238543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:11.022416 kubelet[2496]: I0517 00:12:11.021615 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646684e3-f939-4882-898c-878848df2573-cilium-config-path\") pod \"cilium-operator-5d85765b45-stjfz\" (UID: \"646684e3-f939-4882-898c-878848df2573\") " pod="kube-system/cilium-operator-5d85765b45-stjfz" May 17 00:12:11.022416 kubelet[2496]: I0517 00:12:11.021664 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmh5n\" (UniqueName: \"kubernetes.io/projected/646684e3-f939-4882-898c-878848df2573-kube-api-access-jmh5n\") pod \"cilium-operator-5d85765b45-stjfz\" (UID: \"646684e3-f939-4882-898c-878848df2573\") " pod="kube-system/cilium-operator-5d85765b45-stjfz" May 17 00:12:11.033529 systemd[1]: Started cri-containerd-7886e29888432a2841be29b949247e8f12c3f6f30b0f06092546fc61515a9a70.scope - libcontainer container 7886e29888432a2841be29b949247e8f12c3f6f30b0f06092546fc61515a9a70. May 17 00:12:11.037344 systemd[1]: Started cri-containerd-79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1.scope - libcontainer container 79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1. 
May 17 00:12:11.061971 containerd[1466]: time="2025-05-17T00:12:11.061816140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qcscv,Uid:3a36152b-cd25-402b-8771-e96771121b3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\"" May 17 00:12:11.062326 containerd[1466]: time="2025-05-17T00:12:11.062219544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgpsv,Uid:85980a0d-675a-402f-84fc-93d7540b5a50,Namespace:kube-system,Attempt:0,} returns sandbox id \"7886e29888432a2841be29b949247e8f12c3f6f30b0f06092546fc61515a9a70\"" May 17 00:12:11.062719 kubelet[2496]: E0517 00:12:11.062697 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:11.064058 kubelet[2496]: E0517 00:12:11.063677 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:11.064135 containerd[1466]: time="2025-05-17T00:12:11.063807745Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:12:11.065929 containerd[1466]: time="2025-05-17T00:12:11.065893149Z" level=info msg="CreateContainer within sandbox \"7886e29888432a2841be29b949247e8f12c3f6f30b0f06092546fc61515a9a70\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:12:11.086042 containerd[1466]: time="2025-05-17T00:12:11.085988326Z" level=info msg="CreateContainer within sandbox \"7886e29888432a2841be29b949247e8f12c3f6f30b0f06092546fc61515a9a70\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e7c2ea51f0f62344e11fc96f7b36a51d9f5cff185ee0586496e098bc1ffb7ba0\"" May 17 00:12:11.086566 containerd[1466]: time="2025-05-17T00:12:11.086537908Z" 
level=info msg="StartContainer for \"e7c2ea51f0f62344e11fc96f7b36a51d9f5cff185ee0586496e098bc1ffb7ba0\"" May 17 00:12:11.116539 systemd[1]: Started cri-containerd-e7c2ea51f0f62344e11fc96f7b36a51d9f5cff185ee0586496e098bc1ffb7ba0.scope - libcontainer container e7c2ea51f0f62344e11fc96f7b36a51d9f5cff185ee0586496e098bc1ffb7ba0. May 17 00:12:11.150856 containerd[1466]: time="2025-05-17T00:12:11.150703704Z" level=info msg="StartContainer for \"e7c2ea51f0f62344e11fc96f7b36a51d9f5cff185ee0586496e098bc1ffb7ba0\" returns successfully" May 17 00:12:11.299103 kubelet[2496]: E0517 00:12:11.299047 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:11.301255 containerd[1466]: time="2025-05-17T00:12:11.300853551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-stjfz,Uid:646684e3-f939-4882-898c-878848df2573,Namespace:kube-system,Attempt:0,}" May 17 00:12:11.313200 kubelet[2496]: E0517 00:12:11.313051 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:11.321197 kubelet[2496]: I0517 00:12:11.320759 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fgpsv" podStartSLOduration=1.320740293 podStartE2EDuration="1.320740293s" podCreationTimestamp="2025-05-17 00:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:12:11.320462436 +0000 UTC m=+6.108795576" watchObservedRunningTime="2025-05-17 00:12:11.320740293 +0000 UTC m=+6.109073433" May 17 00:12:11.328455 kubelet[2496]: E0517 00:12:11.327761 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:11.330422 containerd[1466]: time="2025-05-17T00:12:11.330165391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:12:11.330422 containerd[1466]: time="2025-05-17T00:12:11.330251394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:12:11.330422 containerd[1466]: time="2025-05-17T00:12:11.330275621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:11.330570 containerd[1466]: time="2025-05-17T00:12:11.330526987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:12:11.353583 systemd[1]: Started cri-containerd-7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7.scope - libcontainer container 7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7. 
May 17 00:12:11.389556 containerd[1466]: time="2025-05-17T00:12:11.389503214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-stjfz,Uid:646684e3-f939-4882-898c-878848df2573,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7\"" May 17 00:12:11.390458 kubelet[2496]: E0517 00:12:11.390244 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:12.316371 kubelet[2496]: E0517 00:12:12.316345 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:14.855001 kubelet[2496]: E0517 00:12:14.854958 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:15.321091 kubelet[2496]: E0517 00:12:15.321035 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:17.960256 update_engine[1454]: I20250517 00:12:17.960162 1454 update_attempter.cc:509] Updating boot flags... 
May 17 00:12:18.017833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2882) May 17 00:12:18.114506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2885) May 17 00:12:18.150435 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2885) May 17 00:12:18.590078 kubelet[2496]: E0517 00:12:18.590045 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:19.584573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788740414.mount: Deactivated successfully. May 17 00:12:22.066554 containerd[1466]: time="2025-05-17T00:12:22.066502088Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:12:22.067459 containerd[1466]: time="2025-05-17T00:12:22.067415762Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 17 00:12:22.068759 containerd[1466]: time="2025-05-17T00:12:22.068725110Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:12:22.070228 containerd[1466]: time="2025-05-17T00:12:22.070198418Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.006363631s" May 17 
00:12:22.070285 containerd[1466]: time="2025-05-17T00:12:22.070228504Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:12:22.077976 containerd[1466]: time="2025-05-17T00:12:22.077950676Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:12:22.092141 containerd[1466]: time="2025-05-17T00:12:22.092101969Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:12:22.113466 containerd[1466]: time="2025-05-17T00:12:22.113418944Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\"" May 17 00:12:22.116744 containerd[1466]: time="2025-05-17T00:12:22.116715571Z" level=info msg="StartContainer for \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\"" May 17 00:12:22.147529 systemd[1]: Started cri-containerd-5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a.scope - libcontainer container 5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a. May 17 00:12:22.175928 containerd[1466]: time="2025-05-17T00:12:22.175885341Z" level=info msg="StartContainer for \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\" returns successfully" May 17 00:12:22.187935 systemd[1]: cri-containerd-5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a.scope: Deactivated successfully. 
May 17 00:12:22.635492 containerd[1466]: time="2025-05-17T00:12:22.630641458Z" level=info msg="shim disconnected" id=5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a namespace=k8s.io May 17 00:12:22.635492 containerd[1466]: time="2025-05-17T00:12:22.635480433Z" level=warning msg="cleaning up after shim disconnected" id=5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a namespace=k8s.io May 17 00:12:22.635492 containerd[1466]: time="2025-05-17T00:12:22.635494840Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:12:22.638580 kubelet[2496]: E0517 00:12:22.638494 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:23.102641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a-rootfs.mount: Deactivated successfully. May 17 00:12:23.640931 kubelet[2496]: E0517 00:12:23.640802 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:12:23.644097 containerd[1466]: time="2025-05-17T00:12:23.644055974Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:12:23.661962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808306182.mount: Deactivated successfully. 
May 17 00:12:23.662861 containerd[1466]: time="2025-05-17T00:12:23.662827999Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\"" May 17 00:12:23.664100 containerd[1466]: time="2025-05-17T00:12:23.663329925Z" level=info msg="StartContainer for \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\"" May 17 00:12:23.693576 systemd[1]: Started cri-containerd-d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3.scope - libcontainer container d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3. May 17 00:12:23.725353 containerd[1466]: time="2025-05-17T00:12:23.725313663Z" level=info msg="StartContainer for \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\" returns successfully" May 17 00:12:23.737517 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:12:23.737994 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:12:23.738068 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 00:12:23.742798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:12:23.743257 systemd[1]: cri-containerd-d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3.scope: Deactivated successfully. May 17 00:12:23.761941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 17 00:12:23.772581 containerd[1466]: time="2025-05-17T00:12:23.772501061Z" level=info msg="shim disconnected" id=d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3 namespace=k8s.io May 17 00:12:23.772581 containerd[1466]: time="2025-05-17T00:12:23.772561795Z" level=warning msg="cleaning up after shim disconnected" id=d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3 namespace=k8s.io May 17 00:12:23.772581 containerd[1466]: time="2025-05-17T00:12:23.772571423Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:12:24.102867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3-rootfs.mount: Deactivated successfully. May 17 00:12:24.243774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860319709.mount: Deactivated successfully. May 17 00:12:24.534986 containerd[1466]: time="2025-05-17T00:12:24.534935567Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:12:24.535862 containerd[1466]: time="2025-05-17T00:12:24.535824192Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 17 00:12:24.537045 containerd[1466]: time="2025-05-17T00:12:24.536999206Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:12:24.538168 containerd[1466]: time="2025-05-17T00:12:24.538134956Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag 
\"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.460070195s" May 17 00:12:24.538203 containerd[1466]: time="2025-05-17T00:12:24.538170863Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:12:24.540568 containerd[1466]: time="2025-05-17T00:12:24.540543555Z" level=info msg="CreateContainer within sandbox \"7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:12:24.553612 containerd[1466]: time="2025-05-17T00:12:24.553567998Z" level=info msg="CreateContainer within sandbox \"7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\"" May 17 00:12:24.554065 containerd[1466]: time="2025-05-17T00:12:24.554035499Z" level=info msg="StartContainer for \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\"" May 17 00:12:24.586650 systemd[1]: Started cri-containerd-20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb.scope - libcontainer container 20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb. 
May 17 00:12:24.697482 containerd[1466]: time="2025-05-17T00:12:24.697435231Z" level=info msg="StartContainer for \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\" returns successfully"
May 17 00:12:24.702879 kubelet[2496]: E0517 00:12:24.702696 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:24.709660 kubelet[2496]: E0517 00:12:24.709040 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:24.712868 containerd[1466]: time="2025-05-17T00:12:24.711630962Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:12:24.721595 kubelet[2496]: I0517 00:12:24.721509 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-stjfz" podStartSLOduration=1.573604614 podStartE2EDuration="14.721491193s" podCreationTimestamp="2025-05-17 00:12:10 +0000 UTC" firstStartedPulling="2025-05-17 00:12:11.390947564 +0000 UTC m=+6.179280704" lastFinishedPulling="2025-05-17 00:12:24.538834143 +0000 UTC m=+19.327167283" observedRunningTime="2025-05-17 00:12:24.720846217 +0000 UTC m=+19.509179357" watchObservedRunningTime="2025-05-17 00:12:24.721491193 +0000 UTC m=+19.509824333"
May 17 00:12:24.735248 containerd[1466]: time="2025-05-17T00:12:24.735100057Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\""
May 17 00:12:24.737956 containerd[1466]: time="2025-05-17T00:12:24.736150396Z" level=info msg="StartContainer for \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\""
May 17 00:12:24.795590 systemd[1]: Started cri-containerd-f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db.scope - libcontainer container f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db.
May 17 00:12:24.828988 systemd[1]: cri-containerd-f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db.scope: Deactivated successfully.
May 17 00:12:24.884094 containerd[1466]: time="2025-05-17T00:12:24.884027797Z" level=info msg="StartContainer for \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\" returns successfully"
May 17 00:12:24.909568 containerd[1466]: time="2025-05-17T00:12:24.909497251Z" level=info msg="shim disconnected" id=f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db namespace=k8s.io
May 17 00:12:24.909937 containerd[1466]: time="2025-05-17T00:12:24.909694423Z" level=warning msg="cleaning up after shim disconnected" id=f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db namespace=k8s.io
May 17 00:12:24.909937 containerd[1466]: time="2025-05-17T00:12:24.909705915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:12:25.712885 kubelet[2496]: E0517 00:12:25.712841 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:25.712885 kubelet[2496]: E0517 00:12:25.712873 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:25.714866 containerd[1466]: time="2025-05-17T00:12:25.714802954Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:12:25.995912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330990960.mount: Deactivated successfully.
May 17 00:12:26.144114 containerd[1466]: time="2025-05-17T00:12:26.144060483Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\""
May 17 00:12:26.144412 containerd[1466]: time="2025-05-17T00:12:26.144380315Z" level=info msg="StartContainer for \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\""
May 17 00:12:26.189526 systemd[1]: Started cri-containerd-7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b.scope - libcontainer container 7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b.
May 17 00:12:26.210937 systemd[1]: cri-containerd-7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b.scope: Deactivated successfully.
May 17 00:12:26.280159 containerd[1466]: time="2025-05-17T00:12:26.280085425Z" level=info msg="StartContainer for \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\" returns successfully"
May 17 00:12:26.306516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b-rootfs.mount: Deactivated successfully.
May 17 00:12:26.316075 containerd[1466]: time="2025-05-17T00:12:26.316017608Z" level=info msg="shim disconnected" id=7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b namespace=k8s.io
May 17 00:12:26.316075 containerd[1466]: time="2025-05-17T00:12:26.316073112Z" level=warning msg="cleaning up after shim disconnected" id=7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b namespace=k8s.io
May 17 00:12:26.316254 containerd[1466]: time="2025-05-17T00:12:26.316093652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:12:26.718849 kubelet[2496]: E0517 00:12:26.718706 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:26.723038 containerd[1466]: time="2025-05-17T00:12:26.722704403Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:12:26.758146 containerd[1466]: time="2025-05-17T00:12:26.758095897Z" level=info msg="CreateContainer within sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\""
May 17 00:12:26.758603 containerd[1466]: time="2025-05-17T00:12:26.758576572Z" level=info msg="StartContainer for \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\""
May 17 00:12:26.788545 systemd[1]: Started cri-containerd-922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7.scope - libcontainer container 922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7.
May 17 00:12:26.819842 containerd[1466]: time="2025-05-17T00:12:26.819792363Z" level=info msg="StartContainer for \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\" returns successfully"
May 17 00:12:27.082513 kubelet[2496]: I0517 00:12:27.082466 2496 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 17 00:12:27.143610 systemd[1]: Created slice kubepods-burstable-pod3864a724_16bd_4820_9378_5dec85f6df08.slice - libcontainer container kubepods-burstable-pod3864a724_16bd_4820_9378_5dec85f6df08.slice.
May 17 00:12:27.153837 systemd[1]: Created slice kubepods-burstable-pod8979875f_2ed9_4fe7_be7b_bde155fa4646.slice - libcontainer container kubepods-burstable-pod8979875f_2ed9_4fe7_be7b_bde155fa4646.slice.
May 17 00:12:27.232113 kubelet[2496]: I0517 00:12:27.232039 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8979875f-2ed9-4fe7-be7b-bde155fa4646-config-volume\") pod \"coredns-7c65d6cfc9-mzsj5\" (UID: \"8979875f-2ed9-4fe7-be7b-bde155fa4646\") " pod="kube-system/coredns-7c65d6cfc9-mzsj5"
May 17 00:12:27.232113 kubelet[2496]: I0517 00:12:27.232094 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8lgp\" (UniqueName: \"kubernetes.io/projected/8979875f-2ed9-4fe7-be7b-bde155fa4646-kube-api-access-c8lgp\") pod \"coredns-7c65d6cfc9-mzsj5\" (UID: \"8979875f-2ed9-4fe7-be7b-bde155fa4646\") " pod="kube-system/coredns-7c65d6cfc9-mzsj5"
May 17 00:12:27.232113 kubelet[2496]: I0517 00:12:27.232124 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3864a724-16bd-4820-9378-5dec85f6df08-config-volume\") pod \"coredns-7c65d6cfc9-8s87x\" (UID: \"3864a724-16bd-4820-9378-5dec85f6df08\") " pod="kube-system/coredns-7c65d6cfc9-8s87x"
May 17 00:12:27.232353 kubelet[2496]: I0517 00:12:27.232145 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6gc\" (UniqueName: \"kubernetes.io/projected/3864a724-16bd-4820-9378-5dec85f6df08-kube-api-access-vq6gc\") pod \"coredns-7c65d6cfc9-8s87x\" (UID: \"3864a724-16bd-4820-9378-5dec85f6df08\") " pod="kube-system/coredns-7c65d6cfc9-8s87x"
May 17 00:12:27.454338 kubelet[2496]: E0517 00:12:27.450235 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:27.455221 containerd[1466]: time="2025-05-17T00:12:27.454773459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8s87x,Uid:3864a724-16bd-4820-9378-5dec85f6df08,Namespace:kube-system,Attempt:0,}"
May 17 00:12:27.460195 kubelet[2496]: E0517 00:12:27.459861 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:27.460865 containerd[1466]: time="2025-05-17T00:12:27.460567809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mzsj5,Uid:8979875f-2ed9-4fe7-be7b-bde155fa4646,Namespace:kube-system,Attempt:0,}"
May 17 00:12:27.728004 kubelet[2496]: E0517 00:12:27.727885 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:28.290866 systemd[1]: Started sshd@7-10.0.0.20:22-10.0.0.1:58154.service - OpenSSH per-connection server daemon (10.0.0.1:58154).
May 17 00:12:28.374050 sshd[3357]: Accepted publickey for core from 10.0.0.1 port 58154 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:28.377278 sshd[3357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:28.389101 systemd-logind[1451]: New session 8 of user core.
May 17 00:12:28.403713 systemd[1]: Started session-8.scope - Session 8 of User core.
May 17 00:12:28.614002 sshd[3357]: pam_unix(sshd:session): session closed for user core
May 17 00:12:28.622838 systemd[1]: sshd@7-10.0.0.20:22-10.0.0.1:58154.service: Deactivated successfully.
May 17 00:12:28.627050 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:12:28.630832 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit.
May 17 00:12:28.636243 systemd-logind[1451]: Removed session 8.
May 17 00:12:28.730231 kubelet[2496]: E0517 00:12:28.730183 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:29.655932 systemd-networkd[1406]: cilium_host: Link UP
May 17 00:12:29.656104 systemd-networkd[1406]: cilium_net: Link UP
May 17 00:12:29.656337 systemd-networkd[1406]: cilium_net: Gained carrier
May 17 00:12:29.656587 systemd-networkd[1406]: cilium_host: Gained carrier
May 17 00:12:29.731919 kubelet[2496]: E0517 00:12:29.731847 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:29.776436 systemd-networkd[1406]: cilium_vxlan: Link UP
May 17 00:12:29.776452 systemd-networkd[1406]: cilium_vxlan: Gained carrier
May 17 00:12:30.000436 kernel: NET: Registered PF_ALG protocol family
May 17 00:12:30.072642 systemd-networkd[1406]: cilium_net: Gained IPv6LL
May 17 00:12:30.376657 systemd-networkd[1406]: cilium_host: Gained IPv6LL
May 17 00:12:30.708109 systemd-networkd[1406]: lxc_health: Link UP
May 17 00:12:30.720037 systemd-networkd[1406]: lxc_health: Gained carrier
May 17 00:12:30.796285 kubelet[2496]: E0517 00:12:30.796241 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:30.809418 kubelet[2496]: I0517 00:12:30.809153 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qcscv" podStartSLOduration=9.796410924 podStartE2EDuration="20.809131216s" podCreationTimestamp="2025-05-17 00:12:10 +0000 UTC" firstStartedPulling="2025-05-17 00:12:11.063379574 +0000 UTC m=+5.851712704" lastFinishedPulling="2025-05-17 00:12:22.076099846 +0000 UTC m=+16.864432996" observedRunningTime="2025-05-17 00:12:27.862106316 +0000 UTC m=+22.650439456" watchObservedRunningTime="2025-05-17 00:12:30.809131216 +0000 UTC m=+25.597464356"
May 17 00:12:31.146726 systemd-networkd[1406]: lxc600bd9f5649e: Link UP
May 17 00:12:31.155431 kernel: eth0: renamed from tmp0108d
May 17 00:12:31.160852 systemd-networkd[1406]: lxc600bd9f5649e: Gained carrier
May 17 00:12:31.172936 systemd-networkd[1406]: lxc469bfb09618b: Link UP
May 17 00:12:31.175440 kernel: eth0: renamed from tmpf26c8
May 17 00:12:31.185938 systemd-networkd[1406]: lxc469bfb09618b: Gained carrier
May 17 00:12:31.656571 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL
May 17 00:12:31.734703 kubelet[2496]: E0517 00:12:31.734673 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:32.296614 systemd-networkd[1406]: lxc600bd9f5649e: Gained IPv6LL
May 17 00:12:32.360561 systemd-networkd[1406]: lxc_health: Gained IPv6LL
May 17 00:12:32.735981 kubelet[2496]: E0517 00:12:32.735871 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:32.809531 systemd-networkd[1406]: lxc469bfb09618b: Gained IPv6LL
May 17 00:12:33.628085 systemd[1]: Started sshd@8-10.0.0.20:22-10.0.0.1:58168.service - OpenSSH per-connection server daemon (10.0.0.1:58168).
May 17 00:12:33.664454 sshd[3750]: Accepted publickey for core from 10.0.0.1 port 58168 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:33.666272 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:33.670438 systemd-logind[1451]: New session 9 of user core.
May 17 00:12:33.681542 systemd[1]: Started session-9.scope - Session 9 of User core.
May 17 00:12:33.808657 sshd[3750]: pam_unix(sshd:session): session closed for user core
May 17 00:12:33.812772 systemd[1]: sshd@8-10.0.0.20:22-10.0.0.1:58168.service: Deactivated successfully.
May 17 00:12:33.814950 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:12:33.815733 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit.
May 17 00:12:33.816621 systemd-logind[1451]: Removed session 9.
May 17 00:12:34.480564 containerd[1466]: time="2025-05-17T00:12:34.480249527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:12:34.481004 containerd[1466]: time="2025-05-17T00:12:34.480580929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:12:34.481004 containerd[1466]: time="2025-05-17T00:12:34.480641853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:12:34.481004 containerd[1466]: time="2025-05-17T00:12:34.480754676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:12:34.493835 containerd[1466]: time="2025-05-17T00:12:34.493692703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:12:34.493835 containerd[1466]: time="2025-05-17T00:12:34.493762905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:12:34.493835 containerd[1466]: time="2025-05-17T00:12:34.493774186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:12:34.494019 containerd[1466]: time="2025-05-17T00:12:34.493864346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:12:34.523532 systemd[1]: Started cri-containerd-0108d6819f00a99e9ffbaaf77e5e3541a118eca4b0c65ea4d64a49bc68d61f37.scope - libcontainer container 0108d6819f00a99e9ffbaaf77e5e3541a118eca4b0c65ea4d64a49bc68d61f37.
May 17 00:12:34.525063 systemd[1]: Started cri-containerd-f26c8df32fd7f7eb1ea1a099431d39bc85d950a969cb97559aac610d1fbc6b9e.scope - libcontainer container f26c8df32fd7f7eb1ea1a099431d39bc85d950a969cb97559aac610d1fbc6b9e.
May 17 00:12:34.537468 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 17 00:12:34.538731 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 17 00:12:34.562677 containerd[1466]: time="2025-05-17T00:12:34.562630535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8s87x,Uid:3864a724-16bd-4820-9378-5dec85f6df08,Namespace:kube-system,Attempt:0,} returns sandbox id \"f26c8df32fd7f7eb1ea1a099431d39bc85d950a969cb97559aac610d1fbc6b9e\""
May 17 00:12:34.567611 kubelet[2496]: E0517 00:12:34.565903 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:34.569233 containerd[1466]: time="2025-05-17T00:12:34.569197753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mzsj5,Uid:8979875f-2ed9-4fe7-be7b-bde155fa4646,Namespace:kube-system,Attempt:0,} returns sandbox id \"0108d6819f00a99e9ffbaaf77e5e3541a118eca4b0c65ea4d64a49bc68d61f37\""
May 17 00:12:34.569562 containerd[1466]: time="2025-05-17T00:12:34.569534045Z" level=info msg="CreateContainer within sandbox \"f26c8df32fd7f7eb1ea1a099431d39bc85d950a969cb97559aac610d1fbc6b9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:12:34.570148 kubelet[2496]: E0517 00:12:34.570120 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:34.573261 containerd[1466]: time="2025-05-17T00:12:34.573134295Z" level=info msg="CreateContainer within sandbox \"0108d6819f00a99e9ffbaaf77e5e3541a118eca4b0c65ea4d64a49bc68d61f37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:12:34.600311 containerd[1466]: time="2025-05-17T00:12:34.600269507Z" level=info msg="CreateContainer within sandbox \"0108d6819f00a99e9ffbaaf77e5e3541a118eca4b0c65ea4d64a49bc68d61f37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"929dc1f44adbe6d65562221ae09bec74feef4056905b1c07fa5f8a8849f5846d\""
May 17 00:12:34.600788 containerd[1466]: time="2025-05-17T00:12:34.600758265Z" level=info msg="StartContainer for \"929dc1f44adbe6d65562221ae09bec74feef4056905b1c07fa5f8a8849f5846d\""
May 17 00:12:34.604989 containerd[1466]: time="2025-05-17T00:12:34.604958653Z" level=info msg="CreateContainer within sandbox \"f26c8df32fd7f7eb1ea1a099431d39bc85d950a969cb97559aac610d1fbc6b9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a196ad7a0fffdaa58683d277efc0e74278a9a938a8dd5932a9e84c906e5880d5\""
May 17 00:12:34.605420 containerd[1466]: time="2025-05-17T00:12:34.605372812Z" level=info msg="StartContainer for \"a196ad7a0fffdaa58683d277efc0e74278a9a938a8dd5932a9e84c906e5880d5\""
May 17 00:12:34.628543 systemd[1]: Started cri-containerd-929dc1f44adbe6d65562221ae09bec74feef4056905b1c07fa5f8a8849f5846d.scope - libcontainer container 929dc1f44adbe6d65562221ae09bec74feef4056905b1c07fa5f8a8849f5846d.
May 17 00:12:34.631194 systemd[1]: Started cri-containerd-a196ad7a0fffdaa58683d277efc0e74278a9a938a8dd5932a9e84c906e5880d5.scope - libcontainer container a196ad7a0fffdaa58683d277efc0e74278a9a938a8dd5932a9e84c906e5880d5.
May 17 00:12:34.656605 containerd[1466]: time="2025-05-17T00:12:34.656514913Z" level=info msg="StartContainer for \"a196ad7a0fffdaa58683d277efc0e74278a9a938a8dd5932a9e84c906e5880d5\" returns successfully"
May 17 00:12:34.659831 containerd[1466]: time="2025-05-17T00:12:34.659797044Z" level=info msg="StartContainer for \"929dc1f44adbe6d65562221ae09bec74feef4056905b1c07fa5f8a8849f5846d\" returns successfully"
May 17 00:12:34.743104 kubelet[2496]: E0517 00:12:34.741998 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:34.745302 kubelet[2496]: E0517 00:12:34.745057 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:34.763389 kubelet[2496]: I0517 00:12:34.763320 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8s87x" podStartSLOduration=24.763300809 podStartE2EDuration="24.763300809s" podCreationTimestamp="2025-05-17 00:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:12:34.755264809 +0000 UTC m=+29.543597939" watchObservedRunningTime="2025-05-17 00:12:34.763300809 +0000 UTC m=+29.551633949"
May 17 00:12:35.746155 kubelet[2496]: E0517 00:12:35.746127 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:35.746155 kubelet[2496]: E0517 00:12:35.746157 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:36.748128 kubelet[2496]: E0517 00:12:36.748088 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:36.748576 kubelet[2496]: E0517 00:12:36.748291 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:12:38.823352 systemd[1]: Started sshd@9-10.0.0.20:22-10.0.0.1:35856.service - OpenSSH per-connection server daemon (10.0.0.1:35856).
May 17 00:12:38.859230 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:38.860911 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:38.864789 systemd-logind[1451]: New session 10 of user core.
May 17 00:12:38.874589 systemd[1]: Started session-10.scope - Session 10 of User core.
May 17 00:12:38.997125 sshd[3935]: pam_unix(sshd:session): session closed for user core
May 17 00:12:39.000777 systemd[1]: sshd@9-10.0.0.20:22-10.0.0.1:35856.service: Deactivated successfully.
May 17 00:12:39.002888 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:12:39.003535 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit.
May 17 00:12:39.004309 systemd-logind[1451]: Removed session 10.
May 17 00:12:44.008660 systemd[1]: Started sshd@10-10.0.0.20:22-10.0.0.1:35858.service - OpenSSH per-connection server daemon (10.0.0.1:35858).
May 17 00:12:44.040318 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 35858 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:44.041770 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:44.045582 systemd-logind[1451]: New session 11 of user core.
May 17 00:12:44.055518 systemd[1]: Started session-11.scope - Session 11 of User core.
May 17 00:12:44.167667 sshd[3953]: pam_unix(sshd:session): session closed for user core
May 17 00:12:44.171756 systemd[1]: sshd@10-10.0.0.20:22-10.0.0.1:35858.service: Deactivated successfully.
May 17 00:12:44.173965 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:12:44.174562 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit.
May 17 00:12:44.175483 systemd-logind[1451]: Removed session 11.
May 17 00:12:49.182539 systemd[1]: Started sshd@11-10.0.0.20:22-10.0.0.1:45192.service - OpenSSH per-connection server daemon (10.0.0.1:45192).
May 17 00:12:49.215128 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 45192 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:49.216888 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:49.220558 systemd-logind[1451]: New session 12 of user core.
May 17 00:12:49.228557 systemd[1]: Started session-12.scope - Session 12 of User core.
May 17 00:12:49.339929 sshd[3969]: pam_unix(sshd:session): session closed for user core
May 17 00:12:49.349456 systemd[1]: sshd@11-10.0.0.20:22-10.0.0.1:45192.service: Deactivated successfully.
May 17 00:12:49.351320 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:12:49.352990 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit.
May 17 00:12:49.354334 systemd[1]: Started sshd@12-10.0.0.20:22-10.0.0.1:45204.service - OpenSSH per-connection server daemon (10.0.0.1:45204).
May 17 00:12:49.355542 systemd-logind[1451]: Removed session 12.
May 17 00:12:49.401817 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 45204 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:49.403320 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:49.407684 systemd-logind[1451]: New session 13 of user core.
May 17 00:12:49.417512 systemd[1]: Started session-13.scope - Session 13 of User core.
May 17 00:12:49.566772 sshd[3984]: pam_unix(sshd:session): session closed for user core
May 17 00:12:49.578558 systemd[1]: sshd@12-10.0.0.20:22-10.0.0.1:45204.service: Deactivated successfully.
May 17 00:12:49.581098 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:12:49.583360 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit.
May 17 00:12:49.596840 systemd[1]: Started sshd@13-10.0.0.20:22-10.0.0.1:45214.service - OpenSSH per-connection server daemon (10.0.0.1:45214).
May 17 00:12:49.599050 systemd-logind[1451]: Removed session 13.
May 17 00:12:49.643808 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 45214 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:49.644866 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:49.648861 systemd-logind[1451]: New session 14 of user core.
May 17 00:12:49.659560 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:12:49.779671 sshd[3997]: pam_unix(sshd:session): session closed for user core
May 17 00:12:49.783794 systemd[1]: sshd@13-10.0.0.20:22-10.0.0.1:45214.service: Deactivated successfully.
May 17 00:12:49.786036 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:12:49.786732 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit.
May 17 00:12:49.787681 systemd-logind[1451]: Removed session 14.
May 17 00:12:54.791823 systemd[1]: Started sshd@14-10.0.0.20:22-10.0.0.1:45228.service - OpenSSH per-connection server daemon (10.0.0.1:45228).
May 17 00:12:54.825834 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 45228 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:54.827577 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:12:54.831615 systemd-logind[1451]: New session 15 of user core.
May 17 00:12:54.842537 systemd[1]: Started session-15.scope - Session 15 of User core.
May 17 00:12:54.952024 sshd[4011]: pam_unix(sshd:session): session closed for user core
May 17 00:12:54.955744 systemd[1]: sshd@14-10.0.0.20:22-10.0.0.1:45228.service: Deactivated successfully.
May 17 00:12:54.957837 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:12:54.958421 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit.
May 17 00:12:54.959280 systemd-logind[1451]: Removed session 15.
May 17 00:12:59.963306 systemd[1]: Started sshd@15-10.0.0.20:22-10.0.0.1:46186.service - OpenSSH per-connection server daemon (10.0.0.1:46186).
May 17 00:12:59.995145 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 46186 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:12:59.996723 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:00.000440 systemd-logind[1451]: New session 16 of user core.
May 17 00:13:00.007513 systemd[1]: Started session-16.scope - Session 16 of User core.
May 17 00:13:00.113345 sshd[4025]: pam_unix(sshd:session): session closed for user core
May 17 00:13:00.123523 systemd[1]: sshd@15-10.0.0.20:22-10.0.0.1:46186.service: Deactivated successfully.
May 17 00:13:00.125622 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:13:00.127075 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
May 17 00:13:00.140753 systemd[1]: Started sshd@16-10.0.0.20:22-10.0.0.1:46198.service - OpenSSH per-connection server daemon (10.0.0.1:46198).
May 17 00:13:00.141738 systemd-logind[1451]: Removed session 16.
May 17 00:13:00.167831 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 46198 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:00.169519 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:00.173231 systemd-logind[1451]: New session 17 of user core.
May 17 00:13:00.182522 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 00:13:00.353439 sshd[4039]: pam_unix(sshd:session): session closed for user core
May 17 00:13:00.363103 systemd[1]: sshd@16-10.0.0.20:22-10.0.0.1:46198.service: Deactivated successfully.
May 17 00:13:00.364831 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:13:00.366266 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
May 17 00:13:00.375645 systemd[1]: Started sshd@17-10.0.0.20:22-10.0.0.1:46204.service - OpenSSH per-connection server daemon (10.0.0.1:46204).
May 17 00:13:00.376636 systemd-logind[1451]: Removed session 17.
May 17 00:13:00.407350 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 46204 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:00.408711 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:00.412735 systemd-logind[1451]: New session 18 of user core.
May 17 00:13:00.427524 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:13:01.811276 sshd[4051]: pam_unix(sshd:session): session closed for user core
May 17 00:13:01.822370 systemd[1]: sshd@17-10.0.0.20:22-10.0.0.1:46204.service: Deactivated successfully.
May 17 00:13:01.824974 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:13:01.827330 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
May 17 00:13:01.835075 systemd[1]: Started sshd@18-10.0.0.20:22-10.0.0.1:46208.service - OpenSSH per-connection server daemon (10.0.0.1:46208).
May 17 00:13:01.840675 systemd-logind[1451]: Removed session 18.
May 17 00:13:01.862954 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 46208 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:01.864650 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:01.869543 systemd-logind[1451]: New session 19 of user core.
May 17 00:13:01.883691 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:13:02.114637 sshd[4071]: pam_unix(sshd:session): session closed for user core
May 17 00:13:02.124556 systemd[1]: sshd@18-10.0.0.20:22-10.0.0.1:46208.service: Deactivated successfully.
May 17 00:13:02.126562 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:13:02.128224 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
May 17 00:13:02.133643 systemd[1]: Started sshd@19-10.0.0.20:22-10.0.0.1:46218.service - OpenSSH per-connection server daemon (10.0.0.1:46218).
May 17 00:13:02.134638 systemd-logind[1451]: Removed session 19.
May 17 00:13:02.162041 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 46218 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:02.163629 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:02.167580 systemd-logind[1451]: New session 20 of user core.
May 17 00:13:02.179529 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:13:02.285552 sshd[4083]: pam_unix(sshd:session): session closed for user core
May 17 00:13:02.289472 systemd[1]: sshd@19-10.0.0.20:22-10.0.0.1:46218.service: Deactivated successfully.
May 17 00:13:02.291284 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:13:02.291947 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
May 17 00:13:02.292914 systemd-logind[1451]: Removed session 20.
May 17 00:13:07.297311 systemd[1]: Started sshd@20-10.0.0.20:22-10.0.0.1:46232.service - OpenSSH per-connection server daemon (10.0.0.1:46232).
May 17 00:13:07.331225 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 46232 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:07.333276 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:07.338173 systemd-logind[1451]: New session 21 of user core.
May 17 00:13:07.347526 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:13:07.456891 sshd[4099]: pam_unix(sshd:session): session closed for user core
May 17 00:13:07.460964 systemd[1]: sshd@20-10.0.0.20:22-10.0.0.1:46232.service: Deactivated successfully.
May 17 00:13:07.462983 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:13:07.463780 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
May 17 00:13:07.464744 systemd-logind[1451]: Removed session 21.
May 17 00:13:12.476801 systemd[1]: Started sshd@21-10.0.0.20:22-10.0.0.1:41542.service - OpenSSH per-connection server daemon (10.0.0.1:41542).
May 17 00:13:12.509181 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 41542 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:12.510822 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:12.514859 systemd-logind[1451]: New session 22 of user core.
May 17 00:13:12.529661 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:13:12.641741 sshd[4118]: pam_unix(sshd:session): session closed for user core
May 17 00:13:12.645218 systemd[1]: sshd@21-10.0.0.20:22-10.0.0.1:41542.service: Deactivated successfully.
May 17 00:13:12.646931 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:13:12.647621 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit.
May 17 00:13:12.648418 systemd-logind[1451]: Removed session 22.
May 17 00:13:17.656595 systemd[1]: Started sshd@22-10.0.0.20:22-10.0.0.1:41552.service - OpenSSH per-connection server daemon (10.0.0.1:41552).
May 17 00:13:17.688069 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 41552 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:17.689668 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:17.694280 systemd-logind[1451]: New session 23 of user core.
May 17 00:13:17.700553 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:13:17.805101 sshd[4133]: pam_unix(sshd:session): session closed for user core
May 17 00:13:17.808608 systemd[1]: sshd@22-10.0.0.20:22-10.0.0.1:41552.service: Deactivated successfully.
May 17 00:13:17.810577 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:13:17.811127 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit.
May 17 00:13:17.811892 systemd-logind[1451]: Removed session 23.
May 17 00:13:22.816322 systemd[1]: Started sshd@23-10.0.0.20:22-10.0.0.1:45552.service - OpenSSH per-connection server daemon (10.0.0.1:45552).
May 17 00:13:22.847869 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 45552 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:22.849511 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:22.853664 systemd-logind[1451]: New session 24 of user core.
May 17 00:13:22.868615 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:13:22.973192 sshd[4147]: pam_unix(sshd:session): session closed for user core
May 17 00:13:22.991742 systemd[1]: sshd@23-10.0.0.20:22-10.0.0.1:45552.service: Deactivated successfully.
May 17 00:13:22.994039 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:13:22.995607 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
May 17 00:13:23.001667 systemd[1]: Started sshd@24-10.0.0.20:22-10.0.0.1:45568.service - OpenSSH per-connection server daemon (10.0.0.1:45568).
May 17 00:13:23.002577 systemd-logind[1451]: Removed session 24.
May 17 00:13:23.030609 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 45568 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU
May 17 00:13:23.032090 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:23.036069 systemd-logind[1451]: New session 25 of user core.
May 17 00:13:23.043518 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 00:13:24.295119 kubelet[2496]: E0517 00:13:24.295072 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:24.354610 kubelet[2496]: I0517 00:13:24.354530 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mzsj5" podStartSLOduration=74.354513308 podStartE2EDuration="1m14.354513308s" podCreationTimestamp="2025-05-17 00:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:12:34.780311023 +0000 UTC m=+29.568644163" watchObservedRunningTime="2025-05-17 00:13:24.354513308 +0000 UTC m=+79.142846449"
May 17 00:13:24.371327 containerd[1466]: time="2025-05-17T00:13:24.371282577Z" level=info msg="StopContainer for \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\" with timeout 30 (s)"
May 17 00:13:24.371741 containerd[1466]: time="2025-05-17T00:13:24.371638855Z" level=info msg="Stop container \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\" with signal terminated"
May 17 00:13:24.383693 systemd[1]: cri-containerd-20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb.scope: Deactivated successfully.
May 17 00:13:24.401726 containerd[1466]: time="2025-05-17T00:13:24.401644096Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:13:24.401977 containerd[1466]: time="2025-05-17T00:13:24.401956160Z" level=info msg="StopContainer for \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\" with timeout 2 (s)"
May 17 00:13:24.402149 containerd[1466]: time="2025-05-17T00:13:24.402121716Z" level=info msg="Stop container \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\" with signal terminated"
May 17 00:13:24.405201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb-rootfs.mount: Deactivated successfully.
May 17 00:13:24.409031 systemd-networkd[1406]: lxc_health: Link DOWN
May 17 00:13:24.409038 systemd-networkd[1406]: lxc_health: Lost carrier
May 17 00:13:24.416346 containerd[1466]: time="2025-05-17T00:13:24.416284121Z" level=info msg="shim disconnected" id=20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb namespace=k8s.io
May 17 00:13:24.416346 containerd[1466]: time="2025-05-17T00:13:24.416338785Z" level=warning msg="cleaning up after shim disconnected" id=20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb namespace=k8s.io
May 17 00:13:24.416475 containerd[1466]: time="2025-05-17T00:13:24.416348383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:13:24.428158 systemd[1]: cri-containerd-922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7.scope: Deactivated successfully.
May 17 00:13:24.428483 systemd[1]: cri-containerd-922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7.scope: Consumed 7.267s CPU time.
May 17 00:13:24.434865 containerd[1466]: time="2025-05-17T00:13:24.434831615Z" level=info msg="StopContainer for \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\" returns successfully"
May 17 00:13:24.438788 containerd[1466]: time="2025-05-17T00:13:24.438750848Z" level=info msg="StopPodSandbox for \"7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7\""
May 17 00:13:24.438892 containerd[1466]: time="2025-05-17T00:13:24.438795182Z" level=info msg="Container to stop \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:13:24.440759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7-shm.mount: Deactivated successfully.
May 17 00:13:24.447340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7-rootfs.mount: Deactivated successfully.
May 17 00:13:24.457143 systemd[1]: cri-containerd-7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7.scope: Deactivated successfully.
May 17 00:13:24.459958 containerd[1466]: time="2025-05-17T00:13:24.459903001Z" level=info msg="shim disconnected" id=922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7 namespace=k8s.io
May 17 00:13:24.459958 containerd[1466]: time="2025-05-17T00:13:24.459957265Z" level=warning msg="cleaning up after shim disconnected" id=922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7 namespace=k8s.io
May 17 00:13:24.459958 containerd[1466]: time="2025-05-17T00:13:24.459967494Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:13:24.476064 containerd[1466]: time="2025-05-17T00:13:24.476028936Z" level=info msg="StopContainer for \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\" returns successfully"
May 17 00:13:24.476687 containerd[1466]: time="2025-05-17T00:13:24.476665778Z" level=info msg="StopPodSandbox for \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\""
May 17 00:13:24.476750 containerd[1466]: time="2025-05-17T00:13:24.476694683Z" level=info msg="Container to stop \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:13:24.476750 containerd[1466]: time="2025-05-17T00:13:24.476708419Z" level=info msg="Container to stop \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:13:24.476750 containerd[1466]: time="2025-05-17T00:13:24.476717316Z" level=info msg="Container to stop \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:13:24.476750 containerd[1466]: time="2025-05-17T00:13:24.476726283Z" level=info msg="Container to stop \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:13:24.476750 containerd[1466]: time="2025-05-17T00:13:24.476735942Z" level=info msg="Container to stop \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:13:24.477741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7-rootfs.mount: Deactivated successfully.
May 17 00:13:24.483522 systemd[1]: cri-containerd-79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1.scope: Deactivated successfully.
May 17 00:13:24.484209 containerd[1466]: time="2025-05-17T00:13:24.484160688Z" level=info msg="shim disconnected" id=7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7 namespace=k8s.io
May 17 00:13:24.484289 containerd[1466]: time="2025-05-17T00:13:24.484208960Z" level=warning msg="cleaning up after shim disconnected" id=7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7 namespace=k8s.io
May 17 00:13:24.484289 containerd[1466]: time="2025-05-17T00:13:24.484229840Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:13:24.505863 containerd[1466]: time="2025-05-17T00:13:24.505815870Z" level=info msg="TearDown network for sandbox \"7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7\" successfully"
May 17 00:13:24.505863 containerd[1466]: time="2025-05-17T00:13:24.505848983Z" level=info msg="StopPodSandbox for \"7dfc19f1987cec1abc7c0b255a5bae2686f162aa75f0b5f3d9a6c20fd956bec7\" returns successfully"
May 17 00:13:24.519741 containerd[1466]: time="2025-05-17T00:13:24.519518189Z" level=info msg="shim disconnected" id=79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1 namespace=k8s.io
May 17 00:13:24.519741 containerd[1466]: time="2025-05-17T00:13:24.519578714Z" level=warning msg="cleaning up after shim disconnected" id=79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1 namespace=k8s.io
May 17 00:13:24.519741 containerd[1466]: time="2025-05-17T00:13:24.519593162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:13:24.534820 containerd[1466]: time="2025-05-17T00:13:24.534756523Z" level=info msg="TearDown network for sandbox \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" successfully"
May 17 00:13:24.534820 containerd[1466]: time="2025-05-17T00:13:24.534791389Z" level=info msg="StopPodSandbox for \"79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1\" returns successfully"
May 17 00:13:24.560202 kubelet[2496]: I0517 00:13:24.560078 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a36152b-cd25-402b-8771-e96771121b3f-cilium-config-path\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.560202 kubelet[2496]: I0517 00:13:24.560158 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cni-path\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.560376 kubelet[2496]: I0517 00:13:24.560206 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cni-path" (OuterVolumeSpecName: "cni-path") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.560376 kubelet[2496]: I0517 00:13:24.560264 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzlg6\" (UniqueName: \"kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-kube-api-access-xzlg6\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.560748 kubelet[2496]: I0517 00:13:24.560631 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-run\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.560748 kubelet[2496]: I0517 00:13:24.560665 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.560748 kubelet[2496]: I0517 00:13:24.560714 2496 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cni-path\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.560748 kubelet[2496]: I0517 00:13:24.560727 2496 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-run\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.563453 kubelet[2496]: I0517 00:13:24.563435 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a36152b-cd25-402b-8771-e96771121b3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:13:24.563775 kubelet[2496]: I0517 00:13:24.563756 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-kube-api-access-xzlg6" (OuterVolumeSpecName: "kube-api-access-xzlg6") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "kube-api-access-xzlg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:13:24.660915 kubelet[2496]: I0517 00:13:24.660879 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-hubble-tls\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.660915 kubelet[2496]: I0517 00:13:24.660913 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-hostproc\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661064 kubelet[2496]: I0517 00:13:24.660929 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646684e3-f939-4882-898c-878848df2573-cilium-config-path\") pod \"646684e3-f939-4882-898c-878848df2573\" (UID: \"646684e3-f939-4882-898c-878848df2573\") "
May 17 00:13:24.661064 kubelet[2496]: I0517 00:13:24.660942 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-cgroup\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661064 kubelet[2496]: I0517 00:13:24.660957 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-xtables-lock\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661064 kubelet[2496]: I0517 00:13:24.660972 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmh5n\" (UniqueName: \"kubernetes.io/projected/646684e3-f939-4882-898c-878848df2573-kube-api-access-jmh5n\") pod \"646684e3-f939-4882-898c-878848df2573\" (UID: \"646684e3-f939-4882-898c-878848df2573\") "
May 17 00:13:24.661064 kubelet[2496]: I0517 00:13:24.660985 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-etc-cni-netd\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661064 kubelet[2496]: I0517 00:13:24.660998 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-bpf-maps\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661227 kubelet[2496]: I0517 00:13:24.661013 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-net\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661227 kubelet[2496]: I0517 00:13:24.661027 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a36152b-cd25-402b-8771-e96771121b3f-clustermesh-secrets\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661227 kubelet[2496]: I0517 00:13:24.661040 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-kernel\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661227 kubelet[2496]: I0517 00:13:24.661056 2496 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-lib-modules\") pod \"3a36152b-cd25-402b-8771-e96771121b3f\" (UID: \"3a36152b-cd25-402b-8771-e96771121b3f\") "
May 17 00:13:24.661227 kubelet[2496]: I0517 00:13:24.661079 2496 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a36152b-cd25-402b-8771-e96771121b3f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.661227 kubelet[2496]: I0517 00:13:24.661088 2496 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzlg6\" (UniqueName: \"kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-kube-api-access-xzlg6\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.661623 kubelet[2496]: I0517 00:13:24.661024 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.661623 kubelet[2496]: I0517 00:13:24.661038 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-hostproc" (OuterVolumeSpecName: "hostproc") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.661623 kubelet[2496]: I0517 00:13:24.661109 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.661623 kubelet[2496]: I0517 00:13:24.661122 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.661623 kubelet[2496]: I0517 00:13:24.661466 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.661760 kubelet[2496]: I0517 00:13:24.661482 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.661760 kubelet[2496]: I0517 00:13:24.661491 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.664297 kubelet[2496]: I0517 00:13:24.664249 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:13:24.664346 kubelet[2496]: I0517 00:13:24.664309 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646684e3-f939-4882-898c-878848df2573-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "646684e3-f939-4882-898c-878848df2573" (UID: "646684e3-f939-4882-898c-878848df2573"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:13:24.664386 kubelet[2496]: I0517 00:13:24.664370 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:13:24.664439 kubelet[2496]: I0517 00:13:24.664429 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646684e3-f939-4882-898c-878848df2573-kube-api-access-jmh5n" (OuterVolumeSpecName: "kube-api-access-jmh5n") pod "646684e3-f939-4882-898c-878848df2573" (UID: "646684e3-f939-4882-898c-878848df2573"). InnerVolumeSpecName "kube-api-access-jmh5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:13:24.666213 kubelet[2496]: I0517 00:13:24.666188 2496 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a36152b-cd25-402b-8771-e96771121b3f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3a36152b-cd25-402b-8771-e96771121b3f" (UID: "3a36152b-cd25-402b-8771-e96771121b3f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:13:24.761589 kubelet[2496]: I0517 00:13:24.761543 2496 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761589 kubelet[2496]: I0517 00:13:24.761579 2496 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a36152b-cd25-402b-8771-e96771121b3f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761589 kubelet[2496]: I0517 00:13:24.761590 2496 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761589 kubelet[2496]: I0517 00:13:24.761601 2496 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-lib-modules\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761612 2496 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a36152b-cd25-402b-8771-e96771121b3f-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761623 2496 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-hostproc\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761633 2496 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646684e3-f939-4882-898c-878848df2573-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761645 2496 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761655 2496 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761665 2496 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmh5n\" (UniqueName: \"kubernetes.io/projected/646684e3-f939-4882-898c-878848df2573-kube-api-access-jmh5n\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761676 2496 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.761810 kubelet[2496]: I0517 00:13:24.761686 2496 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a36152b-cd25-402b-8771-e96771121b3f-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 17 00:13:24.835931 kubelet[2496]: I0517 00:13:24.835826 2496 scope.go:117] "RemoveContainer" containerID="922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7"
May 17 00:13:24.840794 containerd[1466]: time="2025-05-17T00:13:24.840390182Z" level=info msg="RemoveContainer for \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\""
May 17 00:13:24.843673 systemd[1]: Removed slice kubepods-burstable-pod3a36152b_cd25_402b_8771_e96771121b3f.slice - libcontainer container kubepods-burstable-pod3a36152b_cd25_402b_8771_e96771121b3f.slice.
May 17 00:13:24.843956 systemd[1]: kubepods-burstable-pod3a36152b_cd25_402b_8771_e96771121b3f.slice: Consumed 7.364s CPU time.
May 17 00:13:24.846595 systemd[1]: Removed slice kubepods-besteffort-pod646684e3_f939_4882_898c_878848df2573.slice - libcontainer container kubepods-besteffort-pod646684e3_f939_4882_898c_878848df2573.slice.
May 17 00:13:24.862075 containerd[1466]: time="2025-05-17T00:13:24.862033050Z" level=info msg="RemoveContainer for \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\" returns successfully" May 17 00:13:24.862616 kubelet[2496]: I0517 00:13:24.862567 2496 scope.go:117] "RemoveContainer" containerID="7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b" May 17 00:13:24.863811 containerd[1466]: time="2025-05-17T00:13:24.863774907Z" level=info msg="RemoveContainer for \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\"" May 17 00:13:24.867461 containerd[1466]: time="2025-05-17T00:13:24.867431309Z" level=info msg="RemoveContainer for \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\" returns successfully" May 17 00:13:24.867604 kubelet[2496]: I0517 00:13:24.867579 2496 scope.go:117] "RemoveContainer" containerID="f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db" May 17 00:13:24.868780 containerd[1466]: time="2025-05-17T00:13:24.868750871Z" level=info msg="RemoveContainer for \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\"" May 17 00:13:24.872299 containerd[1466]: time="2025-05-17T00:13:24.872265303Z" level=info msg="RemoveContainer for \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\" returns successfully" May 17 00:13:24.872541 kubelet[2496]: I0517 00:13:24.872447 2496 scope.go:117] "RemoveContainer" containerID="d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3" May 17 00:13:24.873464 containerd[1466]: time="2025-05-17T00:13:24.873439979Z" level=info msg="RemoveContainer for \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\"" May 17 00:13:24.876768 containerd[1466]: time="2025-05-17T00:13:24.876733960Z" level=info msg="RemoveContainer for \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\" returns successfully" May 17 00:13:24.876928 kubelet[2496]: I0517 00:13:24.876902 2496 scope.go:117] 
"RemoveContainer" containerID="5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a" May 17 00:13:24.877863 containerd[1466]: time="2025-05-17T00:13:24.877837663Z" level=info msg="RemoveContainer for \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\"" May 17 00:13:24.882885 containerd[1466]: time="2025-05-17T00:13:24.882861227Z" level=info msg="RemoveContainer for \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\" returns successfully" May 17 00:13:24.883028 kubelet[2496]: I0517 00:13:24.883003 2496 scope.go:117] "RemoveContainer" containerID="922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7" May 17 00:13:24.886097 containerd[1466]: time="2025-05-17T00:13:24.886052804Z" level=error msg="ContainerStatus for \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\": not found" May 17 00:13:24.897412 kubelet[2496]: E0517 00:13:24.897358 2496 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\": not found" containerID="922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7" May 17 00:13:24.897543 kubelet[2496]: I0517 00:13:24.897395 2496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7"} err="failed to get container status \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"922adf3692d032e919fefc30e8a4542452cc2144f9272933b77828901b3977b7\": not found" May 17 00:13:24.897543 kubelet[2496]: I0517 00:13:24.897480 2496 scope.go:117] "RemoveContainer" 
containerID="7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b" May 17 00:13:24.897710 containerd[1466]: time="2025-05-17T00:13:24.897672408Z" level=error msg="ContainerStatus for \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\": not found" May 17 00:13:24.897827 kubelet[2496]: E0517 00:13:24.897794 2496 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\": not found" containerID="7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b" May 17 00:13:24.897827 kubelet[2496]: I0517 00:13:24.897818 2496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b"} err="failed to get container status \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c7f86f63559068687169c0ff56b401dca8db5d45d91588ca91d527b398ea00b\": not found" May 17 00:13:24.897905 kubelet[2496]: I0517 00:13:24.897834 2496 scope.go:117] "RemoveContainer" containerID="f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db" May 17 00:13:24.898094 containerd[1466]: time="2025-05-17T00:13:24.898036761Z" level=error msg="ContainerStatus for \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\": not found" May 17 00:13:24.898197 kubelet[2496]: E0517 00:13:24.898173 2496 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\": not found" containerID="f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db" May 17 00:13:24.898231 kubelet[2496]: I0517 00:13:24.898193 2496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db"} err="failed to get container status \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\": rpc error: code = NotFound desc = an error occurred when try to find container \"f56eb10a0c4e001ad6c072f8e5bbe78d44150e00b1896cac2c871583d8e0f8db\": not found" May 17 00:13:24.898231 kubelet[2496]: I0517 00:13:24.898206 2496 scope.go:117] "RemoveContainer" containerID="d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3" May 17 00:13:24.898381 containerd[1466]: time="2025-05-17T00:13:24.898349948Z" level=error msg="ContainerStatus for \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\": not found" May 17 00:13:24.898517 kubelet[2496]: E0517 00:13:24.898495 2496 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\": not found" containerID="d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3" May 17 00:13:24.898555 kubelet[2496]: I0517 00:13:24.898523 2496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3"} err="failed to get container status \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"d1aea78d3eddbb36e1e9fced511a97bdf21c94c13bd125067492438ec011d8f3\": not found" May 17 00:13:24.898555 kubelet[2496]: I0517 00:13:24.898547 2496 scope.go:117] "RemoveContainer" containerID="5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a" May 17 00:13:24.898715 containerd[1466]: time="2025-05-17T00:13:24.898689635Z" level=error msg="ContainerStatus for \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\": not found" May 17 00:13:24.898821 kubelet[2496]: E0517 00:13:24.898799 2496 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\": not found" containerID="5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a" May 17 00:13:24.898850 kubelet[2496]: I0517 00:13:24.898823 2496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a"} err="failed to get container status \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cbf0491b41d8f123760ae833b04cab7e408da0267024e899f54e7756da8d70a\": not found" May 17 00:13:24.898850 kubelet[2496]: I0517 00:13:24.898837 2496 scope.go:117] "RemoveContainer" containerID="20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb" May 17 00:13:24.899876 containerd[1466]: time="2025-05-17T00:13:24.899851096Z" level=info msg="RemoveContainer for \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\"" May 17 00:13:24.903137 containerd[1466]: time="2025-05-17T00:13:24.903098359Z" level=info msg="RemoveContainer for 
\"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\" returns successfully" May 17 00:13:24.903276 kubelet[2496]: I0517 00:13:24.903255 2496 scope.go:117] "RemoveContainer" containerID="20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb" May 17 00:13:24.903440 containerd[1466]: time="2025-05-17T00:13:24.903411145Z" level=error msg="ContainerStatus for \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\": not found" May 17 00:13:24.903525 kubelet[2496]: E0517 00:13:24.903496 2496 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\": not found" containerID="20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb" May 17 00:13:24.903562 kubelet[2496]: I0517 00:13:24.903528 2496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb"} err="failed to get container status \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\": rpc error: code = NotFound desc = an error occurred when try to find container \"20ae25e3f2de97f6cc7de4959f3f3f1ae6585c0a5370836a3da6cae7a7b5cffb\": not found" May 17 00:13:25.296528 kubelet[2496]: I0517 00:13:25.296496 2496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a36152b-cd25-402b-8771-e96771121b3f" path="/var/lib/kubelet/pods/3a36152b-cd25-402b-8771-e96771121b3f/volumes" May 17 00:13:25.297387 kubelet[2496]: I0517 00:13:25.297363 2496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646684e3-f939-4882-898c-878848df2573" path="/var/lib/kubelet/pods/646684e3-f939-4882-898c-878848df2573/volumes" May 17 
00:13:25.342321 kubelet[2496]: E0517 00:13:25.342281 2496 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:13:25.378861 systemd[1]: var-lib-kubelet-pods-646684e3\x2df939\x2d4882\x2d898c\x2d878848df2573-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmh5n.mount: Deactivated successfully. May 17 00:13:25.378965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1-rootfs.mount: Deactivated successfully. May 17 00:13:25.379037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79d02377c43adbccbfaa264a54eca5c9c965daf94a5bb5684737b9a7ace198f1-shm.mount: Deactivated successfully. May 17 00:13:25.379110 systemd[1]: var-lib-kubelet-pods-3a36152b\x2dcd25\x2d402b\x2d8771\x2de96771121b3f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:13:25.379204 systemd[1]: var-lib-kubelet-pods-3a36152b\x2dcd25\x2d402b\x2d8771\x2de96771121b3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzlg6.mount: Deactivated successfully. May 17 00:13:25.379277 systemd[1]: var-lib-kubelet-pods-3a36152b\x2dcd25\x2d402b\x2d8771\x2de96771121b3f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:13:26.336649 sshd[4161]: pam_unix(sshd:session): session closed for user core May 17 00:13:26.347289 systemd[1]: sshd@24-10.0.0.20:22-10.0.0.1:45568.service: Deactivated successfully. May 17 00:13:26.349115 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:13:26.350691 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit. May 17 00:13:26.357630 systemd[1]: Started sshd@25-10.0.0.20:22-10.0.0.1:45582.service - OpenSSH per-connection server daemon (10.0.0.1:45582). May 17 00:13:26.358499 systemd-logind[1451]: Removed session 25. 
May 17 00:13:26.388876 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 45582 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:13:26.390323 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:26.394419 systemd-logind[1451]: New session 26 of user core. May 17 00:13:26.401525 systemd[1]: Started session-26.scope - Session 26 of User core. May 17 00:13:26.847689 kubelet[2496]: I0517 00:13:26.847653 2496 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:13:26Z","lastTransitionTime":"2025-05-17T00:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:13:26.979875 sshd[4324]: pam_unix(sshd:session): session closed for user core May 17 00:13:26.993500 systemd[1]: sshd@25-10.0.0.20:22-10.0.0.1:45582.service: Deactivated successfully. 
May 17 00:13:26.995196 kubelet[2496]: E0517 00:13:26.994540 2496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a36152b-cd25-402b-8771-e96771121b3f" containerName="mount-cgroup" May 17 00:13:26.995196 kubelet[2496]: E0517 00:13:26.994562 2496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a36152b-cd25-402b-8771-e96771121b3f" containerName="mount-bpf-fs" May 17 00:13:26.995196 kubelet[2496]: E0517 00:13:26.994569 2496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a36152b-cd25-402b-8771-e96771121b3f" containerName="cilium-agent" May 17 00:13:26.995196 kubelet[2496]: E0517 00:13:26.994576 2496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a36152b-cd25-402b-8771-e96771121b3f" containerName="apply-sysctl-overwrites" May 17 00:13:26.995196 kubelet[2496]: E0517 00:13:26.994581 2496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="646684e3-f939-4882-898c-878848df2573" containerName="cilium-operator" May 17 00:13:26.995196 kubelet[2496]: E0517 00:13:26.994587 2496 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a36152b-cd25-402b-8771-e96771121b3f" containerName="clean-cilium-state" May 17 00:13:26.995196 kubelet[2496]: I0517 00:13:26.994608 2496 memory_manager.go:354] "RemoveStaleState removing state" podUID="646684e3-f939-4882-898c-878848df2573" containerName="cilium-operator" May 17 00:13:26.995196 kubelet[2496]: I0517 00:13:26.994622 2496 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a36152b-cd25-402b-8771-e96771121b3f" containerName="cilium-agent" May 17 00:13:26.997615 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:13:27.000038 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit. May 17 00:13:27.014909 systemd[1]: Started sshd@26-10.0.0.20:22-10.0.0.1:45588.service - OpenSSH per-connection server daemon (10.0.0.1:45588). May 17 00:13:27.018561 systemd-logind[1451]: Removed session 26. 
May 17 00:13:27.023359 systemd[1]: Created slice kubepods-burstable-podd66bb95f_e96d_4637_b489_b96c2c920c34.slice - libcontainer container kubepods-burstable-podd66bb95f_e96d_4637_b489_b96c2c920c34.slice. May 17 00:13:27.049608 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 45588 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:13:27.051298 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:27.055281 systemd-logind[1451]: New session 27 of user core. May 17 00:13:27.062608 systemd[1]: Started session-27.scope - Session 27 of User core. May 17 00:13:27.114586 sshd[4337]: pam_unix(sshd:session): session closed for user core May 17 00:13:27.124127 systemd[1]: sshd@26-10.0.0.20:22-10.0.0.1:45588.service: Deactivated successfully. May 17 00:13:27.125868 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:13:27.127522 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit. May 17 00:13:27.141867 systemd[1]: Started sshd@27-10.0.0.20:22-10.0.0.1:45592.service - OpenSSH per-connection server daemon (10.0.0.1:45592). May 17 00:13:27.142924 systemd-logind[1451]: Removed session 27. May 17 00:13:27.169037 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 45592 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:13:27.170586 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:13:27.174694 systemd-logind[1451]: New session 28 of user core. 
May 17 00:13:27.175621 kubelet[2496]: I0517 00:13:27.175593 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-etc-cni-netd\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175703 kubelet[2496]: I0517 00:13:27.175627 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qf9q\" (UniqueName: \"kubernetes.io/projected/d66bb95f-e96d-4637-b489-b96c2c920c34-kube-api-access-6qf9q\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175703 kubelet[2496]: I0517 00:13:27.175645 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-cilium-run\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175703 kubelet[2496]: I0517 00:13:27.175660 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d66bb95f-e96d-4637-b489-b96c2c920c34-cilium-config-path\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175703 kubelet[2496]: I0517 00:13:27.175674 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-bpf-maps\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175789 kubelet[2496]: I0517 00:13:27.175716 2496 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-xtables-lock\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175789 kubelet[2496]: I0517 00:13:27.175744 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-cilium-cgroup\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175789 kubelet[2496]: I0517 00:13:27.175763 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-host-proc-sys-net\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175789 kubelet[2496]: I0517 00:13:27.175781 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d66bb95f-e96d-4637-b489-b96c2c920c34-hubble-tls\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175876 kubelet[2496]: I0517 00:13:27.175800 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-lib-modules\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175876 kubelet[2496]: I0517 00:13:27.175818 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/d66bb95f-e96d-4637-b489-b96c2c920c34-clustermesh-secrets\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175876 kubelet[2496]: I0517 00:13:27.175835 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-host-proc-sys-kernel\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175876 kubelet[2496]: I0517 00:13:27.175853 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-cni-path\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175876 kubelet[2496]: I0517 00:13:27.175871 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d66bb95f-e96d-4637-b489-b96c2c920c34-cilium-ipsec-secrets\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.175980 kubelet[2496]: I0517 00:13:27.175889 2496 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d66bb95f-e96d-4637-b489-b96c2c920c34-hostproc\") pod \"cilium-zpj4w\" (UID: \"d66bb95f-e96d-4637-b489-b96c2c920c34\") " pod="kube-system/cilium-zpj4w" May 17 00:13:27.192539 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 17 00:13:27.326264 kubelet[2496]: E0517 00:13:27.326212 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:13:27.326886 containerd[1466]: time="2025-05-17T00:13:27.326810269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zpj4w,Uid:d66bb95f-e96d-4637-b489-b96c2c920c34,Namespace:kube-system,Attempt:0,}" May 17 00:13:27.362126 containerd[1466]: time="2025-05-17T00:13:27.362036264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:13:27.362126 containerd[1466]: time="2025-05-17T00:13:27.362080067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:13:27.362126 containerd[1466]: time="2025-05-17T00:13:27.362102800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:27.362337 containerd[1466]: time="2025-05-17T00:13:27.362283975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:13:27.383535 systemd[1]: Started cri-containerd-a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc.scope - libcontainer container a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc. 
May 17 00:13:27.404386 containerd[1466]: time="2025-05-17T00:13:27.404277431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zpj4w,Uid:d66bb95f-e96d-4637-b489-b96c2c920c34,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\"" May 17 00:13:27.405114 kubelet[2496]: E0517 00:13:27.405085 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:13:27.406760 containerd[1466]: time="2025-05-17T00:13:27.406727250Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:13:27.760782 containerd[1466]: time="2025-05-17T00:13:27.760640381Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff\"" May 17 00:13:27.761330 containerd[1466]: time="2025-05-17T00:13:27.761299756Z" level=info msg="StartContainer for \"e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff\"" May 17 00:13:27.788617 systemd[1]: Started cri-containerd-e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff.scope - libcontainer container e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff. May 17 00:13:27.814701 containerd[1466]: time="2025-05-17T00:13:27.814664327Z" level=info msg="StartContainer for \"e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff\" returns successfully" May 17 00:13:27.825627 systemd[1]: cri-containerd-e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff.scope: Deactivated successfully. 
May 17 00:13:27.849372 kubelet[2496]: E0517 00:13:27.848669 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:13:27.860273 containerd[1466]: time="2025-05-17T00:13:27.860218495Z" level=info msg="shim disconnected" id=e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff namespace=k8s.io May 17 00:13:27.860273 containerd[1466]: time="2025-05-17T00:13:27.860271686Z" level=warning msg="cleaning up after shim disconnected" id=e4f2f7bbf5b2798acb148ad48230329b9f9831f51873045ac590c91c05c483ff namespace=k8s.io May 17 00:13:27.860434 containerd[1466]: time="2025-05-17T00:13:27.860280002Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:13:28.851373 kubelet[2496]: E0517 00:13:28.851328 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:13:28.853526 containerd[1466]: time="2025-05-17T00:13:28.853478289Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:13:28.868087 containerd[1466]: time="2025-05-17T00:13:28.868038481Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096\"" May 17 00:13:28.868881 containerd[1466]: time="2025-05-17T00:13:28.868838412Z" level=info msg="StartContainer for \"082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096\"" May 17 00:13:28.902573 systemd[1]: Started cri-containerd-082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096.scope - libcontainer container 
082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096. May 17 00:13:28.928558 containerd[1466]: time="2025-05-17T00:13:28.928502254Z" level=info msg="StartContainer for \"082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096\" returns successfully" May 17 00:13:28.935068 systemd[1]: cri-containerd-082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096.scope: Deactivated successfully. May 17 00:13:28.960032 containerd[1466]: time="2025-05-17T00:13:28.959967315Z" level=info msg="shim disconnected" id=082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096 namespace=k8s.io May 17 00:13:28.960032 containerd[1466]: time="2025-05-17T00:13:28.960025065Z" level=warning msg="cleaning up after shim disconnected" id=082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096 namespace=k8s.io May 17 00:13:28.960032 containerd[1466]: time="2025-05-17T00:13:28.960034071Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:13:29.284507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-082e9fec234fa2db5630f6b31ee12cc7760ed3d6f2f9c872e1a72b2bb8e55096-rootfs.mount: Deactivated successfully. 
May 17 00:13:29.855846 kubelet[2496]: E0517 00:13:29.855791 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:13:29.859840 containerd[1466]: time="2025-05-17T00:13:29.859787300Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:13:29.877413 containerd[1466]: time="2025-05-17T00:13:29.877366029Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45\"" May 17 00:13:29.877942 containerd[1466]: time="2025-05-17T00:13:29.877885697Z" level=info msg="StartContainer for \"ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45\"" May 17 00:13:29.905535 systemd[1]: Started cri-containerd-ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45.scope - libcontainer container ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45. May 17 00:13:29.934902 containerd[1466]: time="2025-05-17T00:13:29.934859868Z" level=info msg="StartContainer for \"ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45\" returns successfully" May 17 00:13:29.936512 systemd[1]: cri-containerd-ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45.scope: Deactivated successfully. 
May 17 00:13:29.959698 containerd[1466]: time="2025-05-17T00:13:29.959637579Z" level=info msg="shim disconnected" id=ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45 namespace=k8s.io
May 17 00:13:29.959698 containerd[1466]: time="2025-05-17T00:13:29.959693225Z" level=warning msg="cleaning up after shim disconnected" id=ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45 namespace=k8s.io
May 17 00:13:29.959698 containerd[1466]: time="2025-05-17T00:13:29.959701971Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:13:30.284098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9f82bd811d9939b9ee29eb5fae69dc15b8c3c580462170905f56aa23e82c45-rootfs.mount: Deactivated successfully.
May 17 00:13:30.346275 kubelet[2496]: E0517 00:13:30.343257 2496 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:13:30.858718 kubelet[2496]: E0517 00:13:30.858687 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:30.860385 containerd[1466]: time="2025-05-17T00:13:30.860344579Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:13:30.874422 containerd[1466]: time="2025-05-17T00:13:30.874364656Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60\""
May 17 00:13:30.875429 containerd[1466]: time="2025-05-17T00:13:30.874869976Z" level=info msg="StartContainer for \"0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60\""
May 17 00:13:30.901538 systemd[1]: Started cri-containerd-0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60.scope - libcontainer container 0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60.
May 17 00:13:30.924301 systemd[1]: cri-containerd-0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60.scope: Deactivated successfully.
May 17 00:13:30.926000 containerd[1466]: time="2025-05-17T00:13:30.925967666Z" level=info msg="StartContainer for \"0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60\" returns successfully"
May 17 00:13:30.947873 containerd[1466]: time="2025-05-17T00:13:30.947818762Z" level=info msg="shim disconnected" id=0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60 namespace=k8s.io
May 17 00:13:30.947873 containerd[1466]: time="2025-05-17T00:13:30.947866232Z" level=warning msg="cleaning up after shim disconnected" id=0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60 namespace=k8s.io
May 17 00:13:30.947873 containerd[1466]: time="2025-05-17T00:13:30.947876130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:13:31.284219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ccc45656af2343c1f3da0c50d5de26373f1d2a3be3304086af05fa613f85d60-rootfs.mount: Deactivated successfully.
May 17 00:13:31.295411 kubelet[2496]: E0517 00:13:31.295376 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:31.862015 kubelet[2496]: E0517 00:13:31.861988 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:31.863390 containerd[1466]: time="2025-05-17T00:13:31.863351786Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:13:31.889534 containerd[1466]: time="2025-05-17T00:13:31.889486373Z" level=info msg="CreateContainer within sandbox \"a6717bfd8d39c4321ec96e4991e3aed20e2b8929c35f0a60e1b85d34fb7f58fc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19eb9c420bf28b8cdfef72d45288a5d43cf6ca9cf63307790bf719011618ae23\""
May 17 00:13:31.889995 containerd[1466]: time="2025-05-17T00:13:31.889944894Z" level=info msg="StartContainer for \"19eb9c420bf28b8cdfef72d45288a5d43cf6ca9cf63307790bf719011618ae23\""
May 17 00:13:31.916547 systemd[1]: Started cri-containerd-19eb9c420bf28b8cdfef72d45288a5d43cf6ca9cf63307790bf719011618ae23.scope - libcontainer container 19eb9c420bf28b8cdfef72d45288a5d43cf6ca9cf63307790bf719011618ae23.
May 17 00:13:31.944787 containerd[1466]: time="2025-05-17T00:13:31.944739094Z" level=info msg="StartContainer for \"19eb9c420bf28b8cdfef72d45288a5d43cf6ca9cf63307790bf719011618ae23\" returns successfully"
May 17 00:13:32.354433 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:13:32.866101 kubelet[2496]: E0517 00:13:32.866038 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:32.878561 kubelet[2496]: I0517 00:13:32.878506 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zpj4w" podStartSLOduration=6.878486189 podStartE2EDuration="6.878486189s" podCreationTimestamp="2025-05-17 00:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:13:32.877995869 +0000 UTC m=+87.666329029" watchObservedRunningTime="2025-05-17 00:13:32.878486189 +0000 UTC m=+87.666819329"
May 17 00:13:33.867339 kubelet[2496]: E0517 00:13:33.867303 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:35.383384 systemd-networkd[1406]: lxc_health: Link UP
May 17 00:13:35.390691 systemd-networkd[1406]: lxc_health: Gained carrier
May 17 00:13:36.552645 systemd-networkd[1406]: lxc_health: Gained IPv6LL
May 17 00:13:37.328923 kubelet[2496]: E0517 00:13:37.328888 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:37.875238 kubelet[2496]: E0517 00:13:37.875206 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:41.294946 kubelet[2496]: E0517 00:13:41.294884 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:13:41.891342 sshd[4345]: pam_unix(sshd:session): session closed for user core
May 17 00:13:41.896262 systemd[1]: sshd@27-10.0.0.20:22-10.0.0.1:45592.service: Deactivated successfully.
May 17 00:13:41.899305 systemd[1]: session-28.scope: Deactivated successfully.
May 17 00:13:41.900088 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit.
May 17 00:13:41.901119 systemd-logind[1451]: Removed session 28.