May 8 00:12:58.945092 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025 May 8 00:12:58.945115 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:12:58.945124 kernel: BIOS-provided physical RAM map: May 8 00:12:58.945131 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable May 8 00:12:58.945136 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved May 8 00:12:58.945145 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 8 00:12:58.945151 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable May 8 00:12:58.945158 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved May 8 00:12:58.945164 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:12:58.945170 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 8 00:12:58.945176 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 8 00:12:58.945182 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 8 00:12:58.945187 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable May 8 00:12:58.945194 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 8 00:12:58.945203 kernel: NX (Execute Disable) protection: active May 8 00:12:58.945210 kernel: APIC: Static calls initialized May 8 00:12:58.945216 kernel: SMBIOS 2.8 present. 
May 8 00:12:58.945222 kernel: DMI: Linode Compute Instance, BIOS Not Specified May 8 00:12:58.945229 kernel: Hypervisor detected: KVM May 8 00:12:58.945237 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:12:58.945244 kernel: kvm-clock: using sched offset of 4859492705 cycles May 8 00:12:58.945251 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:12:58.945258 kernel: tsc: Detected 1999.999 MHz processor May 8 00:12:58.945265 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:12:58.945272 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:12:58.945278 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 May 8 00:12:58.945285 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 8 00:12:58.945292 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:12:58.945301 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 May 8 00:12:58.945308 kernel: Using GB pages for direct mapping May 8 00:12:58.945314 kernel: ACPI: Early table checksum verification disabled May 8 00:12:58.945321 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS ) May 8 00:12:58.945327 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:12:58.945334 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:12:58.945341 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:12:58.945348 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 8 00:12:58.945354 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:12:58.945363 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:12:58.945370 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:12:58.945376 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:12:58.945386 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] May 8 00:12:58.945393 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] May 8 00:12:58.945400 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 8 00:12:58.945407 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] May 8 00:12:58.945416 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] May 8 00:12:58.945423 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] May 8 00:12:58.945430 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] May 8 00:12:58.945437 kernel: No NUMA configuration found May 8 00:12:58.945444 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] May 8 00:12:58.945450 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] May 8 00:12:58.945457 kernel: Zone ranges: May 8 00:12:58.945464 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:12:58.945474 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 8 00:12:58.945480 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] May 8 00:12:58.945487 kernel: Movable zone start for each node May 8 00:12:58.945494 kernel: Early memory node ranges May 8 00:12:58.945501 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 8 00:12:58.945508 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] May 8 00:12:58.945515 kernel: node 0: [mem 
0x0000000100000000-0x000000017fffffff] May 8 00:12:58.945522 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] May 8 00:12:58.945528 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:12:58.945537 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 8 00:12:58.945544 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 8 00:12:58.945551 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:12:58.945558 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:12:58.945565 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:12:58.945572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:12:58.945579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:12:58.945585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:12:58.945592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:12:58.945601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:12:58.945670 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:12:58.945678 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:12:58.945685 kernel: TSC deadline timer available May 8 00:12:58.945692 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 8 00:12:58.945699 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 8 00:12:58.945706 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:12:58.945713 kernel: kvm-guest: setup PV sched yield May 8 00:12:58.945720 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 8 00:12:58.945729 kernel: Booting paravirtualized kernel on KVM May 8 00:12:58.945737 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:12:58.945744 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 8 00:12:58.945751 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 May 8 00:12:58.945758 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 May 8 00:12:58.945764 kernel: pcpu-alloc: [0] 0 1 May 8 00:12:58.945771 kernel: kvm-guest: PV spinlocks enabled May 8 00:12:58.945778 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:12:58.945786 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:12:58.945796 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:12:58.945803 kernel: random: crng init done May 8 00:12:58.945810 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:12:58.945817 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:12:58.945824 kernel: Fallback order for Node 0: 0 May 8 00:12:58.945831 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 8 00:12:58.945838 kernel: Policy zone: Normal May 8 00:12:58.945844 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:12:58.945854 kernel: software IO TLB: area num 2. 
May 8 00:12:58.945861 kernel: Memory: 3964164K/4193772K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 229348K reserved, 0K cma-reserved) May 8 00:12:58.945868 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 8 00:12:58.945875 kernel: ftrace: allocating 37918 entries in 149 pages May 8 00:12:58.945882 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:12:58.945889 kernel: Dynamic Preempt: voluntary May 8 00:12:58.945896 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:12:58.945903 kernel: rcu: RCU event tracing is enabled. May 8 00:12:58.945911 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 8 00:12:58.945920 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:12:58.945927 kernel: Rude variant of Tasks RCU enabled. May 8 00:12:58.945934 kernel: Tracing variant of Tasks RCU enabled. May 8 00:12:58.945941 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:12:58.945948 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 8 00:12:58.945955 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 8 00:12:58.945962 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:12:58.945969 kernel: Console: colour VGA+ 80x25 May 8 00:12:58.945975 kernel: printk: console [tty0] enabled May 8 00:12:58.945983 kernel: printk: console [ttyS0] enabled May 8 00:12:58.945992 kernel: ACPI: Core revision 20230628 May 8 00:12:58.945999 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:12:58.946007 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:12:58.946021 kernel: x2apic enabled May 8 00:12:58.946031 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:12:58.946038 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 8 00:12:58.946045 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 8 00:12:58.946053 kernel: kvm-guest: setup PV IPIs May 8 00:12:58.946060 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:12:58.946067 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:12:58.946074 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999) May 8 00:12:58.946084 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:12:58.946091 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:12:58.946099 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:12:58.946106 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:12:58.946113 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:12:58.946123 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:12:58.946130 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:12:58.946138 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 8 00:12:58.946145 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:12:58.946152 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:12:58.946159 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
May 8 00:12:58.946167 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 8 00:12:58.946175 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 8 00:12:58.946185 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:12:58.946192 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:12:58.946199 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:12:58.946206 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 8 00:12:58.946214 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:12:58.946221 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 May 8 00:12:58.946228 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. May 8 00:12:58.946235 kernel: Freeing SMP alternatives memory: 32K May 8 00:12:58.946243 kernel: pid_max: default: 32768 minimum: 301 May 8 00:12:58.946252 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:12:58.946259 kernel: landlock: Up and running. May 8 00:12:58.946266 kernel: SELinux: Initializing. May 8 00:12:58.946273 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:12:58.946281 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:12:58.946288 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) May 8 00:12:58.946295 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:12:58.946303 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:12:58.946310 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 8 00:12:58.946319 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:12:58.946327 kernel: ... version: 0 May 8 00:12:58.946334 kernel: ... bit width: 48 May 8 00:12:58.946341 kernel: ... generic registers: 6 May 8 00:12:58.946348 kernel: ... value mask: 0000ffffffffffff May 8 00:12:58.946355 kernel: ... max period: 00007fffffffffff May 8 00:12:58.946362 kernel: ... fixed-purpose events: 0 May 8 00:12:58.946369 kernel: ... event mask: 000000000000003f May 8 00:12:58.946376 kernel: signal: max sigframe size: 3376 May 8 00:12:58.946386 kernel: rcu: Hierarchical SRCU implementation. May 8 00:12:58.946393 kernel: rcu: Max phase no-delay instances is 400. May 8 00:12:58.946400 kernel: smp: Bringing up secondary CPUs ... May 8 00:12:58.946407 kernel: smpboot: x86: Booting SMP configuration: May 8 00:12:58.946415 kernel: .... 
node #0, CPUs: #1 May 8 00:12:58.946422 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:12:58.946429 kernel: smpboot: Max logical packages: 1 May 8 00:12:58.946436 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) May 8 00:12:58.946443 kernel: devtmpfs: initialized May 8 00:12:58.946450 kernel: x86/mm: Memory block size: 128MB May 8 00:12:58.946460 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:12:58.946468 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 8 00:12:58.946475 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:12:58.946482 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:12:58.946489 kernel: audit: initializing netlink subsys (disabled) May 8 00:12:58.946496 kernel: audit: type=2000 audit(1746663178.732:1): state=initialized audit_enabled=0 res=1 May 8 00:12:58.946503 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:12:58.946511 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:12:58.946520 kernel: cpuidle: using governor menu May 8 00:12:58.946527 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:12:58.946534 kernel: dca service started, version 1.12.1 May 8 00:12:58.946542 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:12:58.946549 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 8 00:12:58.946556 kernel: PCI: Using configuration type 1 for base access May 8 00:12:58.946564 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 8 00:12:58.946571 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:12:58.946578 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:12:58.946588 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:12:58.946595 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:12:58.946602 kernel: ACPI: Added _OSI(Module Device) May 8 00:12:58.946635 kernel: ACPI: Added _OSI(Processor Device) May 8 00:12:58.946643 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:12:58.946650 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:12:58.946657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:12:58.946664 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:12:58.946672 kernel: ACPI: Interpreter enabled May 8 00:12:58.946682 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:12:58.946689 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:12:58.946696 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:12:58.946703 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:12:58.946710 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:12:58.946716 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:12:58.946918 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:12:58.947047 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:12:58.947167 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:12:58.947177 kernel: PCI host bridge to bus 0000:00 May 8 00:12:58.947298 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:12:58.947405 kernel: pci_bus 0000:00: 
root bus resource [io 0x0d00-0xffff window] May 8 00:12:58.947510 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:12:58.947704 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 8 00:12:58.947816 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:12:58.947929 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] May 8 00:12:58.948035 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:12:58.948170 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:12:58.948297 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:12:58.948414 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 8 00:12:58.948529 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 8 00:12:58.951627 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 8 00:12:58.951770 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:12:58.952103 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 May 8 00:12:58.952219 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] May 8 00:12:58.952366 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 8 00:12:58.952484 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 8 00:12:58.953760 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 8 00:12:58.953899 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] May 8 00:12:58.954016 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 8 00:12:58.954134 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 8 00:12:58.954247 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 8 00:12:58.954374 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:12:58.954505 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:12:58.954726 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:12:58.954905 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] May 8 00:12:58.955357 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] May 8 00:12:58.955518 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:12:58.955791 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 8 00:12:58.955805 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:12:58.956082 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:12:58.956092 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:12:58.956103 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:12:58.956111 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:12:58.956118 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:12:58.956125 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:12:58.956132 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:12:58.956140 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:12:58.956147 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:12:58.956154 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:12:58.956161 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:12:58.956171 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:12:58.956178 kernel: ACPI: 
PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:12:58.956185 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:12:58.956192 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:12:58.956199 kernel: iommu: Default domain type: Translated May 8 00:12:58.956206 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:12:58.956214 kernel: PCI: Using ACPI for IRQ routing May 8 00:12:58.956221 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:12:58.956228 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] May 8 00:12:58.956238 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] May 8 00:12:58.956369 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:12:58.956492 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:12:58.956697 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:12:58.956710 kernel: vgaarb: loaded May 8 00:12:58.956717 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 8 00:12:58.956725 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:12:58.956732 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:12:58.956739 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:12:58.956751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:12:58.956758 kernel: pnp: PnP ACPI init May 8 00:12:58.956895 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:12:58.956907 kernel: pnp: PnP ACPI: found 5 devices May 8 00:12:58.956914 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:12:58.956922 kernel: NET: Registered PF_INET protocol family May 8 00:12:58.956929 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:12:58.956936 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:12:58.956947 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:12:58.956954 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:12:58.956961 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:12:58.956969 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:12:58.956975 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:12:58.956983 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:12:58.956990 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:12:58.956997 kernel: NET: Registered PF_XDP protocol family May 8 00:12:58.957109 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:12:58.957294 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:12:58.957469 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:12:58.957804 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 8 00:12:58.958108 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:12:58.958219 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] May 8 00:12:58.958228 kernel: PCI: CLS 0 bytes, default 64 May 8 00:12:58.958236 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 8 00:12:58.958243 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) May 8 00:12:58.958255 kernel: Initialise system trusted 
keyrings May 8 00:12:58.958262 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:12:58.958269 kernel: Key type asymmetric registered May 8 00:12:58.958276 kernel: Asymmetric key parser 'x509' registered May 8 00:12:58.958283 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:12:58.958290 kernel: io scheduler mq-deadline registered May 8 00:12:58.958297 kernel: io scheduler kyber registered May 8 00:12:58.958304 kernel: io scheduler bfq registered May 8 00:12:58.958311 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:12:58.958322 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:12:58.958329 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:12:58.958336 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:12:58.958343 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:12:58.958350 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:12:58.958357 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:12:58.958364 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:12:58.958371 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:12:58.958492 kernel: rtc_cmos 00:03: RTC can wake from S4 May 8 00:12:58.958605 kernel: rtc_cmos 00:03: registered as rtc0 May 8 00:12:58.958767 kernel: rtc_cmos 00:03: setting system clock to 2025-05-08T00:12:58 UTC (1746663178) May 8 00:12:58.958876 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:12:58.958886 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 8 00:12:58.958894 kernel: NET: Registered PF_INET6 protocol family May 8 00:12:58.958901 kernel: Segment Routing with IPv6 May 8 00:12:58.958908 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:12:58.958915 kernel: NET: Registered PF_PACKET protocol family May 8 00:12:58.958927 kernel: Key type dns_resolver registered May 8 00:12:58.958934 kernel: IPI shorthand broadcast: enabled May 8 00:12:58.958942 kernel: sched_clock: Marking stable (726008480, 211578861)->(1008199234, -70611893) May 8 00:12:58.958949 kernel: registered taskstats version 1 May 8 00:12:58.958956 kernel: Loading compiled-in X.509 certificates May 8 00:12:58.958963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6' May 8 00:12:58.958970 kernel: Key type .fscrypt registered May 8 00:12:58.958977 kernel: Key type fscrypt-provisioning registered May 8 00:12:58.958985 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:12:58.958995 kernel: ima: Allocated hash algorithm: sha1 May 8 00:12:58.959002 kernel: ima: No architecture policies found May 8 00:12:58.959009 kernel: clk: Disabling unused clocks May 8 00:12:58.959016 kernel: Freeing unused kernel image (initmem) memory: 43484K May 8 00:12:58.959023 kernel: Write protecting the kernel read-only data: 38912k May 8 00:12:58.959031 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 8 00:12:58.959038 kernel: Run /init as init process May 8 00:12:58.959045 kernel: with arguments: May 8 00:12:58.959052 kernel: /init May 8 00:12:58.959062 kernel: with environment: May 8 00:12:58.959068 kernel: HOME=/ May 8 00:12:58.959076 kernel: TERM=linux May 8 00:12:58.959083 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:12:58.959091 systemd[1]: Successfully made /usr/ read-only. 
May 8 00:12:58.959101 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:12:58.959110 systemd[1]: Detected virtualization kvm. May 8 00:12:58.959120 systemd[1]: Detected architecture x86-64. May 8 00:12:58.959127 systemd[1]: Running in initrd. May 8 00:12:58.959168 systemd[1]: No hostname configured, using default hostname. May 8 00:12:58.959196 systemd[1]: Hostname set to . May 8 00:12:58.959204 systemd[1]: Initializing machine ID from random generator. May 8 00:12:58.959226 systemd[1]: Queued start job for default target initrd.target. May 8 00:12:58.959239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:12:58.959248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:12:58.959256 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:12:58.959265 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:12:58.959278 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:12:58.959286 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:12:58.959295 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:12:58.959306 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:12:58.959314 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:12:58.959322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:12:58.959330 systemd[1]: Reached target paths.target - Path Units. May 8 00:12:58.959337 systemd[1]: Reached target slices.target - Slice Units. May 8 00:12:58.959345 systemd[1]: Reached target swap.target - Swaps. May 8 00:12:58.959353 systemd[1]: Reached target timers.target - Timer Units. May 8 00:12:58.959361 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:12:58.959369 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:12:58.959379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:12:58.959387 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 8 00:12:58.959395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:12:58.959403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:12:58.959411 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:12:58.959418 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:12:58.959426 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:12:58.959434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:12:58.959444 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:12:58.959452 systemd[1]: Starting systemd-fsck-usr.service... 
May 8 00:12:58.959460 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:12:58.959468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:12:58.959476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:12:58.959508 systemd-journald[178]: Collecting audit messages is disabled. May 8 00:12:58.959531 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:12:58.959540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:12:58.959551 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:12:58.959559 systemd-journald[178]: Journal started May 8 00:12:58.959578 systemd-journald[178]: Runtime Journal (/run/log/journal/58418e180c1f4fb997841c342e5c980f) is 8M, max 78.3M, 70.3M free. May 8 00:12:58.957892 systemd-modules-load[179]: Inserted module 'overlay' May 8 00:12:58.963424 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:12:58.972302 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:12:59.030627 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:12:59.030651 kernel: Bridge firewalling registered May 8 00:12:58.981730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:12:58.989410 systemd-modules-load[179]: Inserted module 'br_netfilter' May 8 00:12:59.038048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:12:59.038910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:12:59.051999 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:12:59.055178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:12:59.057157 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:12:59.080995 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:12:59.089198 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:12:59.094713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:12:59.103025 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:12:59.105049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:12:59.116996 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:12:59.118974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:12:59.121458 dracut-cmdline[207]: dracut-dracut-053 May 8 00:12:59.127516 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979 May 8 00:12:59.159359 systemd-resolved[212]: Positive Trust Anchors: May 8 00:12:59.160098 systemd-resolved[212]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:12:59.160126 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:12:59.165626 systemd-resolved[212]: Defaulting to hostname 'linux'. May 8 00:12:59.166717 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:12:59.167750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:12:59.208653 kernel: SCSI subsystem initialized May 8 00:12:59.217636 kernel: Loading iSCSI transport class v2.0-870. May 8 00:12:59.229645 kernel: iscsi: registered transport (tcp) May 8 00:12:59.252158 kernel: iscsi: registered transport (qla4xxx) May 8 00:12:59.252234 kernel: QLogic iSCSI HBA Driver May 8 00:12:59.316021 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:12:59.321785 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:12:59.350832 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:12:59.350896 kernel: device-mapper: uevent: version 1.0.3 May 8 00:12:59.353254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:12:59.396642 kernel: raid6: avx2x4 gen() 26296 MB/s May 8 00:12:59.414645 kernel: raid6: avx2x2 gen() 25489 MB/s May 8 00:12:59.432994 kernel: raid6: avx2x1 gen() 17207 MB/s May 8 00:12:59.433026 kernel: raid6: using algorithm avx2x4 gen() 26296 MB/s May 8 00:12:59.451976 kernel: raid6: .... xor() 3055 MB/s, rmw enabled May 8 00:12:59.452024 kernel: raid6: using avx2x2 recovery algorithm May 8 00:12:59.471650 kernel: xor: automatically using best checksumming function avx May 8 00:12:59.598659 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:12:59.617723 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:12:59.623799 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:12:59.665576 systemd-udevd[397]: Using default interface naming scheme 'v255'. May 8 00:12:59.673344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:12:59.682772 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 00:12:59.703527 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 8 00:12:59.749322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:12:59.755937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:12:59.832010 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:12:59.843810 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:12:59.866105 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:12:59.869456 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 8 00:12:59.870501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:12:59.871205 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:12:59.879807 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:12:59.904233 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:12:59.922646 kernel: scsi host0: Virtio SCSI HBA May 8 00:12:59.927631 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 8 00:12:59.927706 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:12:59.943683 kernel: libata version 3.00 loaded. May 8 00:12:59.960861 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:12:59.960903 kernel: AES CTR mode by8 optimization enabled May 8 00:12:59.971629 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:13:00.181845 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:13:00.181870 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:13:00.182032 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:13:00.182171 kernel: scsi host1: ahci May 8 00:13:00.182326 kernel: scsi host2: ahci May 8 00:13:00.182469 kernel: scsi host3: ahci May 8 00:13:00.182645 kernel: scsi host4: ahci May 8 00:13:00.182796 kernel: scsi host5: ahci May 8 00:13:00.182937 kernel: scsi host6: ahci May 8 00:13:00.183072 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 May 8 00:13:00.183083 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 May 8 00:13:00.183094 kernel: sd 0:0:0:0: Power-on or device reset occurred May 8 00:13:00.203233 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 May 8 00:13:00.203253 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) May 8 00:13:00.203410 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 May 8 00:13:00.203422 kernel: sd 0:0:0:0: [sda] Write Protect is off May 8 00:13:00.203565 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 May 8 00:13:00.203577 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 8 00:13:00.203824 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 May 8 00:13:00.203836 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 8 00:13:00.203976 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:13:00.203992 kernel: GPT:9289727 != 167739391 May 8 00:13:00.204002 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:13:00.204012 kernel: GPT:9289727 != 167739391 May 8 00:13:00.204021 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:13:00.204031 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:13:00.204041 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 8 00:12:59.988310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:12:59.988444 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:13:00.113900 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:13:00.114500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:13:00.114737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:00.115430 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 8 00:13:00.125638 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:13:00.152911 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:13:00.257543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:00.276994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:13:00.303457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:13:00.484644 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:13:00.492624 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:13:00.492654 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:13:00.494649 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:13:00.497626 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 8 00:13:00.504369 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:13:00.553654 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (450) May 8 00:13:00.558669 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (463) May 8 00:13:00.564298 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 8 00:13:00.574671 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 8 00:13:00.594109 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 8 00:13:00.595390 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 8 00:13:00.605445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 8 00:13:00.618789 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:13:00.625690 disk-uuid[568]: Primary Header is updated. May 8 00:13:00.625690 disk-uuid[568]: Secondary Entries is updated. May 8 00:13:00.625690 disk-uuid[568]: Secondary Header is updated. May 8 00:13:00.631722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:13:00.638646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:13:01.642544 disk-uuid[569]: The operation has completed successfully. May 8 00:13:01.645738 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 8 00:13:01.694021 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:13:01.694152 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:13:01.737784 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:13:01.742242 sh[583]: Success May 8 00:13:01.756708 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:13:01.816420 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:13:01.839230 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:13:01.840909 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 8 00:13:01.873120 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a May 8 00:13:01.873193 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:01.875176 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:13:01.878801 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:13:01.878833 kernel: BTRFS info (device dm-0): using free space tree May 8 00:13:01.888626 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 00:13:01.890648 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:13:01.892906 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:13:01.898822 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:13:01.901274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:13:01.928295 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:01.928372 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:01.931395 kernel: BTRFS info (device sda6): using free space tree May 8 00:13:01.937601 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:13:01.937642 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:13:01.944635 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:01.948404 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 00:13:01.956844 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:13:02.029749 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:13:02.033376 ignition[697]: Ignition 2.20.0 May 8 00:13:02.033400 ignition[697]: Stage: fetch-offline May 8 00:13:02.033436 ignition[697]: no configs at "/usr/lib/ignition/base.d" May 8 00:13:02.033445 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:13:02.033520 ignition[697]: parsed url from cmdline: "" May 8 00:13:02.033524 ignition[697]: no config URL provided May 8 00:13:02.033529 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:13:02.033538 ignition[697]: no config at "/usr/lib/ignition/user.ign" May 8 00:13:02.033542 ignition[697]: failed to fetch config: resource requires networking May 8 00:13:02.038780 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:13:02.033732 ignition[697]: Ignition finished successfully May 8 00:13:02.039669 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:13:02.066507 systemd-networkd[767]: lo: Link UP May 8 00:13:02.066520 systemd-networkd[767]: lo: Gained carrier May 8 00:13:02.068197 systemd-networkd[767]: Enumeration completed May 8 00:13:02.068278 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:13:02.068944 systemd[1]: Reached target network.target - Network. May 8 00:13:02.069291 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:02.069295 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 8 00:13:02.070869 systemd-networkd[767]: eth0: Link UP May 8 00:13:02.070873 systemd-networkd[767]: eth0: Gained carrier May 8 00:13:02.070880 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:02.077831 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 8 00:13:02.090727 ignition[771]: Ignition 2.20.0 May 8 00:13:02.090740 ignition[771]: Stage: fetch May 8 00:13:02.090885 ignition[771]: no configs at "/usr/lib/ignition/base.d" May 8 00:13:02.090895 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:13:02.091162 ignition[771]: parsed url from cmdline: "" May 8 00:13:02.091165 ignition[771]: no config URL provided May 8 00:13:02.091170 ignition[771]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:13:02.091179 ignition[771]: no config at "/usr/lib/ignition/user.ign" May 8 00:13:02.091201 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #1 May 8 00:13:02.091341 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 8 00:13:02.291532 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #2 May 8 00:13:02.291694 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 8 00:13:02.692035 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #3 May 8 00:13:02.692240 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 8 00:13:03.193670 systemd-networkd[767]: eth0: DHCPv4 address 172.232.9.214/24, gateway 172.232.9.1 acquired from 23.34.57.43 May 8 00:13:03.492873 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #4 May 8 00:13:03.579631 ignition[771]: PUT result: OK May 8 00:13:03.579686 ignition[771]: GET http://169.254.169.254/v1/user-data: attempt #1 May 8 00:13:03.687749 ignition[771]: GET result: OK May 8 00:13:03.687875 ignition[771]: parsing config with SHA512: 5c466cc9fcb9c40cd29f936c3691f2ab638c4feda77abdb82da34f0799a16f48cad85699fe14f0feb0a6f31cb8fcad6345c27074d5f92d2f4ca6bcf7d04b9419 May 8 00:13:03.699787 unknown[771]: fetched base config from "system" May 8 00:13:03.700321 ignition[771]: fetch: fetch complete May 8 00:13:03.699817 unknown[771]: fetched base config from "system" May 8 00:13:03.700327 ignition[771]: fetch: fetch passed May 8 00:13:03.699824 unknown[771]: fetched user config from "akamai" May 8 00:13:03.700388 ignition[771]: Ignition finished successfully May 8 00:13:03.702856 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 8 00:13:03.709740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:13:03.727755 ignition[778]: Ignition 2.20.0 May 8 00:13:03.727767 ignition[778]: Stage: kargs May 8 00:13:03.727903 ignition[778]: no configs at "/usr/lib/ignition/base.d" May 8 00:13:03.727915 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:13:03.728946 ignition[778]: kargs: kargs passed May 8 00:13:03.731480 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 00:13:03.729190 ignition[778]: Ignition finished successfully May 8 00:13:03.736784 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 8 00:13:03.750969 ignition[785]: Ignition 2.20.0 May 8 00:13:03.750984 ignition[785]: Stage: disks May 8 00:13:03.751175 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 8 00:13:03.751191 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:13:03.755930 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:13:03.752433 ignition[785]: disks: disks passed May 8 00:13:03.779055 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:13:03.752484 ignition[785]: Ignition finished successfully May 8 00:13:03.779760 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:13:03.780480 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:13:03.781212 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:13:03.782453 systemd[1]: Reached target basic.target - Basic System. May 8 00:13:03.791733 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:13:03.809582 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:13:03.813047 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:13:03.816788 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:13:03.892780 systemd-networkd[767]: eth0: Gained IPv6LL May 8 00:13:03.900639 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none. May 8 00:13:03.900475 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:13:03.901742 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:13:03.916712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:13:03.919281 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:13:03.920702 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:13:03.920749 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:13:03.920773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:13:03.928191 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:13:03.938838 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (801) May 8 00:13:03.938870 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:03.938883 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:03.938893 kernel: BTRFS info (device sda6): using free space tree May 8 00:13:03.938730 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:13:03.946021 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:13:03.946058 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:13:03.947790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:13:03.992792 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:13:03.998690 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory May 8 00:13:04.004623 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:13:04.009799 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:13:04.119264 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:13:04.126702 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:13:04.129779 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:13:04.135812 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:13:04.140643 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:04.163434 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:13:04.166979 ignition[914]: INFO : Ignition 2.20.0 May 8 00:13:04.168711 ignition[914]: INFO : Stage: mount May 8 00:13:04.168711 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:13:04.168711 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:13:04.170830 ignition[914]: INFO : mount: mount passed May 8 00:13:04.170830 ignition[914]: INFO : Ignition finished successfully May 8 00:13:04.171736 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:13:04.176735 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:13:04.905742 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:13:04.920660 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (927) May 8 00:13:04.924014 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:13:04.924057 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:13:04.926158 kernel: BTRFS info (device sda6): using free space tree May 8 00:13:04.934972 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:13:04.934997 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:13:04.939328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:13:04.961621 ignition[944]: INFO : Ignition 2.20.0 May 8 00:13:04.961621 ignition[944]: INFO : Stage: files May 8 00:13:04.963086 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:13:04.963086 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:13:04.963086 ignition[944]: DEBUG : files: compiled without relabeling support, skipping May 8 00:13:04.965531 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:13:04.965531 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:13:04.967450 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:13:04.967450 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:13:04.969653 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:13:04.969653 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:13:04.969653 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 8 00:13:04.967899 unknown[944]: wrote ssh authorized keys file for user: core May 8 00:13:05.265729 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:13:05.462693 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:13:05.463969 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:13:05.463969 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 8 00:13:05.763394 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:13:05.856313 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:13:05.857400 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:13:05.857400 ignition[944]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:13:05.865843 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:13:05.865843 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:13:05.865843 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:13:05.865843 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:13:05.865843 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:13:06.063104 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:13:06.408878 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:13:06.408878 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:13:06.411342 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:13:06.412356 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:13:06.412356 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:13:06.412356 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:13:06.412356 ignition[944]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 8 00:13:06.412356 ignition[944]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 8 00:13:06.412356 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:13:06.412356 ignition[944]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 8 00:13:06.412356 ignition[944]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:13:06.412356 ignition[944]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:13:06.412356 ignition[944]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:13:06.412356 ignition[944]: INFO : files: files passed May 8 00:13:06.412356 ignition[944]: INFO : Ignition finished successfully May 8 00:13:06.415656 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:13:06.425934 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
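The "files" stage above writes the helm and cilium archives, the user manifests, the Flatcar update.conf, the kubernetes sysext image plus its /etc/extensions link, and the prepare-helm.service unit with a coreos-metadata drop-in. Purely as a hypothetical sketch of the kind of Ignition-style config that drives writes like these (the paths, URLs, link target, and unit name are taken from the log above; the spec version, sources layout, and everything else are assumptions):

    import json

    # Hypothetical config fragment; only the paths, URLs, link target, and unit
    # name come from the log above -- the rest is assumed for illustration.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"},
            ],
        },
        "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
    }

    print(json.dumps(config, indent=2))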
May 8 00:13:06.429993 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:13:06.432133 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:13:06.432248 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:13:06.444733 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:13:06.444733 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:13:06.448147 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:13:06.451119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:13:06.452301 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:13:06.459024 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:13:06.496015 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:13:06.496169 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:13:06.497926 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:13:06.498806 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:13:06.499505 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:13:06.502585 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:13:06.519112 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:13:06.524790 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:13:06.535177 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:13:06.536508 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:13:06.537925 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:13:06.539252 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:13:06.539364 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:13:06.541271 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:13:06.542215 systemd[1]: Stopped target basic.target - Basic System. May 8 00:13:06.543417 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:13:06.544470 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:13:06.545728 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:13:06.547005 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:13:06.548214 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:13:06.549436 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:13:06.550682 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:13:06.551827 systemd[1]: Stopped target swap.target - Swaps. May 8 00:13:06.552795 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:13:06.552900 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:13:06.554199 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 8 00:13:06.555003 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:13:06.556142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:13:06.556494 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:13:06.557406 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:13:06.557503 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:13:06.559301 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:13:06.559409 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:13:06.560428 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:13:06.560548 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:13:06.569153 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:13:06.569728 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:13:06.569875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:13:06.573807 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:13:06.575477 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:13:06.575668 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:13:06.576743 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:13:06.577757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:13:06.587757 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:13:06.588523 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:13:06.595661 ignition[996]: INFO : Ignition 2.20.0 May 8 00:13:06.595661 ignition[996]: INFO : Stage: umount May 8 00:13:06.595661 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:13:06.595661 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:13:06.600314 ignition[996]: INFO : umount: umount passed May 8 00:13:06.600314 ignition[996]: INFO : Ignition finished successfully May 8 00:13:06.602058 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:13:06.602451 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:13:06.604392 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:13:06.604476 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:13:06.629533 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:13:06.629605 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:13:06.630737 systemd[1]: ignition-fetch.service: Deactivated successfully. May 8 00:13:06.630798 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 8 00:13:06.632002 systemd[1]: Stopped target network.target - Network. May 8 00:13:06.633031 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:13:06.633083 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:13:06.634357 systemd[1]: Stopped target paths.target - Path Units. May 8 00:13:06.635575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:13:06.639693 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 8 00:13:06.640299 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:13:06.641721 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:13:06.643002 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:13:06.643046 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:13:06.644228 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:13:06.644267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:13:06.645511 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:13:06.645563 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:13:06.646543 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:13:06.646590 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:13:06.647900 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:13:06.649400 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:13:06.651867 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:13:06.652725 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:13:06.652848 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:13:06.654033 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:13:06.654112 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:13:06.656229 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:13:06.656354 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:13:06.659361 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 8 00:13:06.659818 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:13:06.659973 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:13:06.664357 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 8 00:13:06.665921 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:13:06.665966 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:13:06.675742 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:13:06.676998 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:13:06.677072 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:13:06.677760 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:13:06.677810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:13:06.679520 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:13:06.679570 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:13:06.680416 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:13:06.680464 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:13:06.682439 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:13:06.685807 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:13:06.686086 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 8 00:13:06.697158 systemd[1]: network-cleanup.service: Deactivated successfully. 
May 8 00:13:06.697871 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:13:06.700313 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:13:06.700515 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:13:06.702237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:13:06.702314 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:13:06.703794 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:13:06.703831 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:13:06.704938 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:13:06.704990 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:13:06.706918 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:13:06.706968 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:13:06.708428 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:13:06.708478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:13:06.714783 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:13:06.715376 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:13:06.715432 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:13:06.719703 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:13:06.719762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:06.722556 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:13:06.722646 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 8 00:13:06.725158 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:13:06.725280 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:13:06.727495 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:13:06.735738 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:13:06.743730 systemd[1]: Switching root. May 8 00:13:06.776393 systemd-journald[178]: Journal stopped May 8 00:13:07.950038 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). May 8 00:13:07.950101 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:13:07.950115 kernel: SELinux: policy capability open_perms=1 May 8 00:13:07.950313 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:13:07.950327 kernel: SELinux: policy capability always_check_network=0 May 8 00:13:07.950342 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:13:07.950351 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:13:07.950361 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:13:07.950369 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:13:07.950379 kernel: audit: type=1403 audit(1746663186.905:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:13:07.950389 systemd[1]: Successfully loaded SELinux policy in 51.141ms. May 8 00:13:07.950402 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.399ms. 
May 8 00:13:07.950442 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 8 00:13:07.950455 systemd[1]: Detected virtualization kvm. May 8 00:13:07.952643 systemd[1]: Detected architecture x86-64. May 8 00:13:07.952663 systemd[1]: Detected first boot. May 8 00:13:07.952679 systemd[1]: Initializing machine ID from random generator. May 8 00:13:07.952690 zram_generator::config[1044]: No configuration found. May 8 00:13:07.952701 kernel: Guest personality initialized and is inactive May 8 00:13:07.952711 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 8 00:13:07.952720 kernel: Initialized host personality May 8 00:13:07.952729 kernel: NET: Registered PF_VSOCK protocol family May 8 00:13:07.952739 systemd[1]: Populated /etc with preset unit settings. May 8 00:13:07.952753 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 8 00:13:07.952764 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:13:07.952774 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:13:07.952784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:13:07.952794 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:13:07.952804 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:13:07.952815 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:13:07.952827 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:13:07.952837 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:13:07.952847 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:13:07.952858 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:13:07.952868 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:13:07.952878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:13:07.952890 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:13:07.952900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:13:07.952911 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:13:07.952923 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:13:07.952937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:13:07.952947 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:13:07.952958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:13:07.952968 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:13:07.952979 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:13:07.952989 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
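The systemd 256.8 banner above encodes compile-time options as a +/- feature list; the +SELINUX entry is what the policy-load messages a few lines earlier correspond to. A small check of that string (copied verbatim from the banner above):

    # Feature string copied from the systemd 256.8 banner above.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
                "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    enabled  = {f[1:] for f in features.split() if f[0] == "+"}
    disabled = {f[1:] for f in features.split() if f[0] == "-"}
    print(len(enabled), "built in;", len(disabled), "left out")
    print("SELINUX" in enabled)   # True -- matches the policy load logged above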
May 8 00:13:07.953002 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:13:07.953012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:13:07.953023 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:13:07.953033 systemd[1]: Reached target slices.target - Slice Units. May 8 00:13:07.953043 systemd[1]: Reached target swap.target - Swaps. May 8 00:13:07.953054 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:13:07.953064 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:13:07.953075 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 8 00:13:07.953085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:13:07.953098 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:13:07.953109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:13:07.953120 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:13:07.953130 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:13:07.953143 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:13:07.953154 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:13:07.953164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:07.953175 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:13:07.953185 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:13:07.953196 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:13:07.953207 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:13:07.953217 systemd[1]: Reached target machines.target - Containers. May 8 00:13:07.953230 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:13:07.953241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:13:07.953251 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:13:07.953262 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:13:07.953272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:13:07.953283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:13:07.953293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:13:07.953304 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:13:07.953315 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:13:07.953328 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:13:07.953339 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:13:07.953350 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:13:07.953361 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
May 8 00:13:07.953371 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:13:07.953383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:13:07.953393 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:13:07.953404 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:13:07.953417 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:13:07.953428 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:13:07.953439 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 8 00:13:07.953449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:13:07.953481 systemd-journald[1128]: Collecting audit messages is disabled. May 8 00:13:07.953506 kernel: fuse: init (API version 7.39) May 8 00:13:07.953517 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:13:07.953528 kernel: loop: module loaded May 8 00:13:07.953538 systemd[1]: Stopped verity-setup.service. May 8 00:13:07.953549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:07.953560 kernel: ACPI: bus type drm_connector registered May 8 00:13:07.953570 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:13:07.953583 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:13:07.953594 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:13:07.953622 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:13:07.953635 systemd-journald[1128]: Journal started May 8 00:13:07.953657 systemd-journald[1128]: Runtime Journal (/run/log/journal/16d0910e883941348b8b88791243dd9d) is 8M, max 78.3M, 70.3M free. May 8 00:13:07.589253 systemd[1]: Queued start job for default target multi-user.target. May 8 00:13:07.601970 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 8 00:13:07.602957 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:13:07.956788 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:13:07.956315 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:13:07.957940 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:13:07.959041 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:13:07.961135 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:13:07.962414 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:13:07.963844 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:13:07.964760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:13:07.964984 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:13:07.966351 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:13:07.967748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:13:07.968572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 8 00:13:07.969958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:13:07.971012 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:13:07.971690 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:13:07.973168 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:13:07.974254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:13:07.976207 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:13:07.977794 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:13:07.979522 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:13:07.982097 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 8 00:13:08.000038 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:13:08.008041 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:13:08.014988 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:13:08.016344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:13:08.016421 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:13:08.018406 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 8 00:13:08.027575 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:13:08.034280 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:13:08.035121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:13:08.042365 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:13:08.047792 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:13:08.049742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:13:08.056733 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:13:08.057812 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:13:08.059850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:13:08.064958 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:13:08.070776 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:13:08.084621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:13:08.086515 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:13:08.087195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:13:08.089579 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:13:08.092454 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:13:08.112588 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
May 8 00:13:08.120370 systemd-journald[1128]: Time spent on flushing to /var/log/journal/16d0910e883941348b8b88791243dd9d is 26.789ms for 998 entries. May 8 00:13:08.120370 systemd-journald[1128]: System Journal (/var/log/journal/16d0910e883941348b8b88791243dd9d) is 8M, max 195.6M, 187.6M free. May 8 00:13:08.169401 systemd-journald[1128]: Received client request to flush runtime journal. May 8 00:13:08.125766 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 8 00:13:08.138896 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:13:08.158491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:13:08.171469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:13:08.182431 kernel: loop0: detected capacity change from 0 to 147912 May 8 00:13:08.186152 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 8 00:13:08.191709 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:13:08.201954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:13:08.206529 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:13:08.228626 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:13:08.246223 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. May 8 00:13:08.246246 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. May 8 00:13:08.254181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:13:08.255725 kernel: loop1: detected capacity change from 0 to 8 May 8 00:13:08.297784 kernel: loop2: detected capacity change from 0 to 138176 May 8 00:13:08.347671 kernel: loop3: detected capacity change from 0 to 218376 May 8 00:13:08.407592 kernel: loop4: detected capacity change from 0 to 147912 May 8 00:13:08.433627 kernel: loop5: detected capacity change from 0 to 8 May 8 00:13:08.440628 kernel: loop6: detected capacity change from 0 to 138176 May 8 00:13:08.471691 kernel: loop7: detected capacity change from 0 to 218376 May 8 00:13:08.500918 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 8 00:13:08.502091 (sd-merge)[1192]: Merged extensions into '/usr'. May 8 00:13:08.509594 systemd[1]: Reload requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:13:08.509628 systemd[1]: Reloading... May 8 00:13:08.621663 zram_generator::config[1220]: No configuration found. May 8 00:13:08.651450 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:13:08.758244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:08.822168 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:13:08.822518 systemd[1]: Reloading finished in 312 ms. May 8 00:13:08.843623 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:13:08.844835 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
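systemd-sysext (the sd-merge messages above) finds the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-akamai' extensions and overlays them onto /usr, which is why a reload of systemd follows. A rough sketch of where such images are looked for, per the usual systemd-sysext search paths (the directories are the general convention, not something this log states):

    from pathlib import Path

    # Conventional systemd-sysext search directories; the kubernetes image was
    # linked into /etc/extensions by the Ignition files stage earlier in this log.
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def candidate_sysexts():
        """Return names of *.raw images or directory trees in the search paths."""
        names = []
        for d in map(Path, SEARCH_DIRS):
            if d.is_dir():
                names += [p.name for p in d.iterdir() if p.suffix == ".raw" or p.is_dir()]
        return sorted(names)

    if __name__ == "__main__":
        print(candidate_sysexts())   # e.g. ['kubernetes.raw'] on a host provisioned like this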
May 8 00:13:08.845796 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:13:08.867161 systemd[1]: Starting ensure-sysext.service... May 8 00:13:08.869763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:13:08.874972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:13:08.898733 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:13:08.899523 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:13:08.899702 systemd[1]: Reload requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... May 8 00:13:08.899757 systemd[1]: Reloading... May 8 00:13:08.900878 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:13:08.901404 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. May 8 00:13:08.901534 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. May 8 00:13:08.907412 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:13:08.907423 systemd-tmpfiles[1265]: Skipping /boot May 8 00:13:08.931393 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:13:08.931481 systemd-tmpfiles[1265]: Skipping /boot May 8 00:13:08.948168 systemd-udevd[1266]: Using default interface naming scheme 'v255'. May 8 00:13:08.998795 zram_generator::config[1298]: No configuration found. May 8 00:13:09.172636 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1301) May 8 00:13:09.208503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:09.245626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:13:09.261653 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:13:09.264784 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:13:09.264995 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:13:09.265285 kernel: ACPI: button: Power Button [PWRF] May 8 00:13:09.275631 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:13:09.285710 kernel: EDAC MC: Ver: 3.0.0 May 8 00:13:09.325939 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:13:09.326574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 8 00:13:09.327475 systemd[1]: Reloading finished in 427 ms. May 8 00:13:09.335539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:13:09.337004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:13:09.370690 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:13:09.387386 systemd[1]: Finished ensure-sysext.service. May 8 00:13:09.392950 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:13:09.411539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 8 00:13:09.417769 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:13:09.420756 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:13:09.421467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:13:09.424836 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:13:09.430846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:13:09.442294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:13:09.446471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:13:09.454946 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:13:09.455369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:13:09.456309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:13:09.460603 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:13:09.461732 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 8 00:13:09.468777 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:13:09.480157 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:13:09.484688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:13:09.490011 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:13:09.501721 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:13:09.513141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:13:09.514392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:13:09.516491 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:13:09.518159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:13:09.519468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:13:09.524407 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:13:09.524650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:13:09.525421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:13:09.526476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:13:09.527358 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:13:09.528232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:13:09.536795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:13:09.547113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:13:09.556928 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
May 8 00:13:09.557540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:13:09.557602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:13:09.560203 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:13:09.562710 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:13:09.563775 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:13:09.567542 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:13:09.569871 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:13:09.577307 augenrules[1419]: No rules May 8 00:13:09.577535 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:13:09.580465 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:13:09.581001 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:13:09.590823 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:13:09.602855 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:13:09.621812 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:13:09.633405 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:13:09.700677 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:13:09.752453 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:13:09.756565 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:13:09.773593 systemd-resolved[1391]: Positive Trust Anchors: May 8 00:13:09.773929 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:13:09.774041 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:13:09.778842 systemd-resolved[1391]: Defaulting to hostname 'linux'. May 8 00:13:09.780864 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:13:09.781593 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:13:09.782419 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:13:09.782503 systemd-networkd[1388]: lo: Link UP May 8 00:13:09.782514 systemd-networkd[1388]: lo: Gained carrier May 8 00:13:09.783582 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:13:09.784211 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
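systemd-resolved above seeds its positive trust anchor with the DNS root zone's DS record and lists the usual negative anchors for private and reverse zones before defaulting the stub hostname to 'linux'. Decoding the fields of that DS record (the record text is quoted from the log; the algorithm and digest names follow the standard DNSSEC registries):

    # Root-zone trust anchor as logged by systemd-resolved above.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    algos   = {8: "RSASHA256"}   # DNSSEC algorithm numbers (RFC 5702)
    digests = {2: "SHA-256"}     # DS digest types (RFC 4509)

    _, _, _, key_tag, algo, digest_type, digest = ds.split(maxsplit=6)
    print(f"key tag {key_tag}, {algos[int(algo)]}, {digests[int(digest_type)]}, "
          f"{len(digest) // 2}-byte digest")
    # -> key tag 20326, RSASHA256, SHA-256, 32-byte digest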
May 8 00:13:09.785056 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:13:09.785243 systemd-networkd[1388]: Enumeration completed May 8 00:13:09.785667 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:09.785677 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:13:09.785806 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:13:09.786740 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:13:09.786893 systemd-networkd[1388]: eth0: Link UP May 8 00:13:09.786912 systemd-networkd[1388]: eth0: Gained carrier May 8 00:13:09.786926 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:13:09.787487 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:13:09.787521 systemd[1]: Reached target paths.target - Path Units. May 8 00:13:09.788088 systemd[1]: Reached target timers.target - Timer Units. May 8 00:13:09.790174 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:13:09.792376 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:13:09.795601 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 8 00:13:09.796413 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 8 00:13:09.797068 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 8 00:13:09.800469 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:13:09.801689 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 8 00:13:09.802885 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:13:09.803858 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:13:09.804722 systemd[1]: Reached target network.target - Network. May 8 00:13:09.805261 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:13:09.805821 systemd[1]: Reached target basic.target - Basic System. May 8 00:13:09.806390 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:13:09.806481 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:13:09.811691 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:13:09.815338 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 8 00:13:09.817795 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:13:09.820740 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:13:09.826709 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:13:09.827232 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:13:09.828738 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:13:09.830707 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
May 8 00:13:09.835763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:13:09.847748 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:13:09.859812 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:13:09.862776 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 8 00:13:09.866581 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:13:09.868303 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:13:09.869868 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:13:09.871807 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:13:09.875739 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:13:09.889471 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:13:09.890677 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:13:09.893270 jq[1460]: true May 8 00:13:09.918886 jq[1447]: false May 8 00:13:09.928277 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:13:09.928552 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:13:09.938419 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:13:09.940290 dbus-daemon[1446]: [system] SELinux support is enabled May 8 00:13:09.940442 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:13:09.946078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:13:09.946111 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:13:09.947009 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:13:09.947225 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 8 00:13:09.969720 jq[1464]: true May 8 00:13:09.981589 extend-filesystems[1448]: Found loop4 May 8 00:13:09.981589 extend-filesystems[1448]: Found loop5 May 8 00:13:09.981589 extend-filesystems[1448]: Found loop6 May 8 00:13:09.981589 extend-filesystems[1448]: Found loop7 May 8 00:13:09.981589 extend-filesystems[1448]: Found sda May 8 00:13:09.981589 extend-filesystems[1448]: Found sda1 May 8 00:13:09.981589 extend-filesystems[1448]: Found sda2 May 8 00:13:09.981589 extend-filesystems[1448]: Found sda3 May 8 00:13:09.981589 extend-filesystems[1448]: Found usr May 8 00:13:09.981589 extend-filesystems[1448]: Found sda4 May 8 00:13:09.981589 extend-filesystems[1448]: Found sda6 May 8 00:13:09.981589 extend-filesystems[1448]: Found sda7 May 8 00:13:09.981589 extend-filesystems[1448]: Found sda9 May 8 00:13:09.981589 extend-filesystems[1448]: Checking size of /dev/sda9 May 8 00:13:09.975348 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 8 00:13:10.058524 update_engine[1459]: I20250508 00:13:10.031566 1459 main.cc:92] Flatcar Update Engine starting May 8 00:13:10.058524 update_engine[1459]: I20250508 00:13:10.040737 1459 update_check_scheduler.cc:74] Next update check in 2m35s May 8 00:13:10.058790 coreos-metadata[1445]: May 08 00:13:10.026 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 8 00:13:10.059001 extend-filesystems[1448]: Resized partition /dev/sda9 May 8 00:13:10.060676 tar[1463]: linux-amd64/LICENSE May 8 00:13:10.060676 tar[1463]: linux-amd64/helm May 8 00:13:10.020398 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:13:10.020760 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:13:10.064708 bash[1501]: Updated "/home/core/.ssh/authorized_keys" May 8 00:13:10.038511 systemd[1]: Started update-engine.service - Update Engine. May 8 00:13:10.045824 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:13:10.050826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:13:10.057837 systemd[1]: Starting sshkeys.service... May 8 00:13:10.073495 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) May 8 00:13:10.093803 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 8 00:13:10.090467 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 8 00:13:10.096938 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 8 00:13:10.190062 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:13:10.190117 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:13:10.190471 systemd-logind[1455]: New seat seat0. May 8 00:13:10.192674 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1305) May 8 00:13:10.208046 systemd[1]: Started systemd-logind.service - User Login Management. 
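extend-filesystems above walks the partitions, then resize2fs grows the root filesystem online from 553472 to 20360187 4k blocks, i.e. from roughly 2.1 GiB (the initial image size) to roughly 77.7 GiB. The arithmetic, for reference:

    BLOCK = 4096                      # ext4 block size, "(4k) blocks" per the messages above
    old, new = 553_472, 20_360_187    # block counts from the resize messages above
    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{to_gib(old):.1f} GiB -> {to_gib(new):.1f} GiB ({new / old:.0f}x growth)")
    # -> 2.1 GiB -> 77.7 GiB (37x growth)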
May 8 00:13:10.290489 systemd-networkd[1388]: eth0: DHCPv4 address 172.232.9.214/24, gateway 172.232.9.1 acquired from 23.34.57.43 May 8 00:13:10.291082 dbus-daemon[1446]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1388 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 8 00:13:10.303801 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 8 00:13:10.304779 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 8 00:13:10.356701 coreos-metadata[1509]: May 08 00:13:10.352 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 8 00:13:10.423323 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:13:10.445905 containerd[1469]: time="2025-05-08T00:13:10.445278449Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:13:10.457475 coreos-metadata[1509]: May 08 00:13:10.457 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 8 00:13:10.473689 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 8 00:13:10.482026 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 8 00:13:10.482026 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 10 May 8 00:13:10.482026 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 8 00:13:10.489408 extend-filesystems[1448]: Resized filesystem in /dev/sda9 May 8 00:13:10.484215 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:13:10.484493 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:13:10.532080 containerd[1469]: time="2025-05-08T00:13:10.531824912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:13:10.533548 containerd[1469]: time="2025-05-08T00:13:10.533518553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.533640453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.533662933Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.533914913Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.533930783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.533998063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.534009993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.534206983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.534219663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.534231913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.534240193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.534329633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:13:10.535626 containerd[1469]: time="2025-05-08T00:13:10.534544844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:13:10.535853 containerd[1469]: time="2025-05-08T00:13:10.534713734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:13:10.535853 containerd[1469]: time="2025-05-08T00:13:10.534727374Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:13:10.535853 containerd[1469]: time="2025-05-08T00:13:10.534834884Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:13:10.535853 containerd[1469]: time="2025-05-08T00:13:10.534898944Z" level=info msg="metadata content store policy set" policy=shared May 8 00:13:10.539269 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 8 00:13:10.541335 dbus-daemon[1446]: [system] Successfully activated service 'org.freedesktop.hostname1' May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.541534697Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.541587837Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.541633087Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.541661517Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.541680897Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.541831487Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542098907Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542218617Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542234057Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542247017Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542259167Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542272757Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542283567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543659 containerd[1469]: time="2025-05-08T00:13:10.542295547Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542308357Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542320437Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542331857Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542341467Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542364197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542376577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542402657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542415757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542428527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542447677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542463227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542474458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542485798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:13:10.543970 containerd[1469]: time="2025-05-08T00:13:10.542499478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542509068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542519738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542534118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542552078Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542576668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542604018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542642298Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542735278Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542753648Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542765648Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542776398Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542784888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542795748Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:13:10.544193 containerd[1469]: time="2025-05-08T00:13:10.542805368Z" level=info msg="NRI interface is disabled by configuration." May 8 00:13:10.544476 containerd[1469]: time="2025-05-08T00:13:10.542814438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:13:10.544497 containerd[1469]: time="2025-05-08T00:13:10.543074338Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:13:10.544497 containerd[1469]: time="2025-05-08T00:13:10.543115628Z" level=info msg="Connect containerd service" May 8 00:13:10.544497 containerd[1469]: time="2025-05-08T00:13:10.543144418Z" level=info msg="using legacy CRI server" May 8 00:13:10.544497 containerd[1469]: time="2025-05-08T00:13:10.543151358Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:13:10.544497 containerd[1469]: time="2025-05-08T00:13:10.543269378Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:13:10.545529 dbus-daemon[1446]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1522 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 8 00:13:10.548150 containerd[1469]: time="2025-05-08T00:13:10.548103480Z" level=error 
msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:13:10.548974 containerd[1469]: time="2025-05-08T00:13:10.548406380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:13:10.548974 containerd[1469]: time="2025-05-08T00:13:10.548463820Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:13:10.548974 containerd[1469]: time="2025-05-08T00:13:10.548506051Z" level=info msg="Start subscribing containerd event" May 8 00:13:10.548974 containerd[1469]: time="2025-05-08T00:13:10.548544381Z" level=info msg="Start recovering state" May 8 00:13:10.555893 systemd[1]: Starting polkit.service - Authorization Manager... May 8 00:13:10.558566 containerd[1469]: time="2025-05-08T00:13:10.557237245Z" level=info msg="Start event monitor" May 8 00:13:10.558566 containerd[1469]: time="2025-05-08T00:13:10.557276325Z" level=info msg="Start snapshots syncer" May 8 00:13:10.558566 containerd[1469]: time="2025-05-08T00:13:10.557287845Z" level=info msg="Start cni network conf syncer for default" May 8 00:13:10.558566 containerd[1469]: time="2025-05-08T00:13:10.557303325Z" level=info msg="Start streaming server" May 8 00:13:10.557407 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:13:10.559480 containerd[1469]: time="2025-05-08T00:13:10.559457696Z" level=info msg="containerd successfully booted in 0.118063s" May 8 00:13:10.576373 polkitd[1531]: Started polkitd version 121 May 8 00:13:10.586574 polkitd[1531]: Loading rules from directory /etc/polkit-1/rules.d May 8 00:13:10.587895 polkitd[1531]: Loading rules from directory /usr/share/polkit-1/rules.d May 8 00:13:10.591206 polkitd[1531]: Finished loading, compiling and executing 2 rules May 8 00:13:10.591656 dbus-daemon[1446]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 8 00:13:10.591828 systemd[1]: Started polkit.service - Authorization Manager. May 8 00:13:10.592088 polkitd[1531]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 8 00:13:10.595723 coreos-metadata[1509]: May 08 00:13:10.595 INFO Fetch successful May 8 00:13:10.626159 systemd-hostnamed[1522]: Hostname set to <172-232-9-214> (transient) May 8 00:13:10.627344 systemd-resolved[1391]: System hostname changed to '172-232-9-214'. May 8 00:13:10.630604 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys" May 8 00:13:10.632370 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 00:13:10.637391 systemd[1]: Finished sshkeys.service. May 8 00:13:10.715563 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:13:10.742158 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:13:10.750139 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:13:10.760306 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:13:10.760891 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:13:10.769857 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:13:10.782066 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:13:10.790004 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:13:10.795797 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
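The containerd error above ("no network config found in /etc/cni/net.d") is expected on a node where no CNI plugin has been installed yet; the CRI plugin keeps running and its conf syncer watches that directory, so pod networking simply stays unavailable until a config file appears. In a kubeadm cluster that file normally comes from a network add-on rather than being written by hand, but purely as an illustration a minimal bridge config could look like the sketch below. The file name, bridge name, and subnet are made up here, and the standard bridge/host-local plugin binaries are assumed to exist under /opt/cni/bin, the NetworkPluginBinDir shown in the CRI config dump above.

    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[ { "subnet": "10.85.0.0/16" } ]]
          }
        }
      ]
    }
    EOF
    # The "Start cni network conf syncer" goroutine logged above picks this
    # up without a containerd restart.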
May 8 00:13:10.796643 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:13:10.931451 tar[1463]: linux-amd64/README.md May 8 00:13:10.944420 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:13:11.036842 coreos-metadata[1445]: May 08 00:13:11.036 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 8 00:13:11.142501 coreos-metadata[1445]: May 08 00:13:11.142 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 8 00:13:11.328824 coreos-metadata[1445]: May 08 00:13:11.328 INFO Fetch successful May 8 00:13:11.328824 coreos-metadata[1445]: May 08 00:13:11.328 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 8 00:13:11.588253 coreos-metadata[1445]: May 08 00:13:11.588 INFO Fetch successful May 8 00:13:11.637808 systemd-networkd[1388]: eth0: Gained IPv6LL May 8 00:13:11.638422 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 8 00:13:11.643799 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:13:11.653729 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:13:11.657836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:11.670080 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:13:11.702811 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:13:11.706744 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 00:13:11.709068 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:13:12.594622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:12.595569 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:13:12.596878 systemd[1]: Startup finished in 855ms (kernel) + 8.198s (initrd) + 5.740s (userspace) = 14.794s. May 8 00:13:12.640258 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:13:13.141298 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 8 00:13:13.162026 kubelet[1602]: E0508 00:13:13.161977 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:13:13.166340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:13:13.166555 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:13:13.167113 systemd[1]: kubelet.service: Consumed 924ms CPU time, 249.7M memory peak. May 8 00:13:14.053765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:13:14.058854 systemd[1]: Started sshd@0-172.232.9.214:22-139.178.89.65:35174.service - OpenSSH per-connection server daemon (139.178.89.65:35174). May 8 00:13:14.404512 sshd[1614]: Accepted publickey for core from 139.178.89.65 port 35174 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:13:14.407201 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:14.419222 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
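The kubelet exit above (config file missing, status=1/FAILURE) is the normal state of a node that has not yet initialized or joined a cluster: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, and until then systemd keeps rescheduling the unit. Only as an illustration, a minimal hand-written KubeletConfiguration might look like the sketch below; cgroupDriver is set to systemd to match the SystemdCgroup:true runc option in the containerd config dump earlier, and the runtime endpoint matches the containerd socket from the same dump. On this machine the real file is expected to come from kubeadm rather than being written by hand.

    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF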
May 8 00:13:14.424090 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:13:14.425899 systemd-logind[1455]: New session 1 of user core. May 8 00:13:14.440699 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:13:14.445996 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:13:14.451056 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:13:14.453702 systemd-logind[1455]: New session c1 of user core. May 8 00:13:14.584585 systemd[1618]: Queued start job for default target default.target. May 8 00:13:14.593145 systemd[1618]: Created slice app.slice - User Application Slice. May 8 00:13:14.593175 systemd[1618]: Reached target paths.target - Paths. May 8 00:13:14.593222 systemd[1618]: Reached target timers.target - Timers. May 8 00:13:14.595205 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:13:14.607096 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:13:14.607211 systemd[1618]: Reached target sockets.target - Sockets. May 8 00:13:14.607248 systemd[1618]: Reached target basic.target - Basic System. May 8 00:13:14.607293 systemd[1618]: Reached target default.target - Main User Target. May 8 00:13:14.607326 systemd[1618]: Startup finished in 146ms. May 8 00:13:14.607580 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:13:14.615725 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:13:14.877850 systemd[1]: Started sshd@1-172.232.9.214:22-139.178.89.65:35178.service - OpenSSH per-connection server daemon (139.178.89.65:35178). May 8 00:13:15.093517 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 8 00:13:15.204883 sshd[1629]: Accepted publickey for core from 139.178.89.65 port 35178 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:13:15.206548 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:15.218856 systemd-logind[1455]: New session 2 of user core. May 8 00:13:15.225032 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:13:15.449569 sshd[1631]: Connection closed by 139.178.89.65 port 35178 May 8 00:13:15.450353 sshd-session[1629]: pam_unix(sshd:session): session closed for user core May 8 00:13:15.453324 systemd[1]: sshd@1-172.232.9.214:22-139.178.89.65:35178.service: Deactivated successfully. May 8 00:13:15.455262 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:13:15.456867 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. May 8 00:13:15.457863 systemd-logind[1455]: Removed session 2. May 8 00:13:15.509713 systemd[1]: Started sshd@2-172.232.9.214:22-139.178.89.65:35190.service - OpenSSH per-connection server daemon (139.178.89.65:35190). May 8 00:13:15.840420 sshd[1637]: Accepted publickey for core from 139.178.89.65 port 35190 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:13:15.842195 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:15.846934 systemd-logind[1455]: New session 3 of user core. May 8 00:13:15.856730 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 8 00:13:16.082531 sshd[1639]: Connection closed by 139.178.89.65 port 35190 May 8 00:13:16.083455 sshd-session[1637]: pam_unix(sshd:session): session closed for user core May 8 00:13:16.088293 systemd[1]: sshd@2-172.232.9.214:22-139.178.89.65:35190.service: Deactivated successfully. May 8 00:13:16.091249 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:13:16.092528 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. May 8 00:13:16.094701 systemd-logind[1455]: Removed session 3. May 8 00:13:16.152862 systemd[1]: Started sshd@3-172.232.9.214:22-139.178.89.65:56104.service - OpenSSH per-connection server daemon (139.178.89.65:56104). May 8 00:13:16.498945 sshd[1645]: Accepted publickey for core from 139.178.89.65 port 56104 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:13:16.500887 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:16.506785 systemd-logind[1455]: New session 4 of user core. May 8 00:13:16.516725 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:13:16.747589 sshd[1647]: Connection closed by 139.178.89.65 port 56104 May 8 00:13:16.748518 sshd-session[1645]: pam_unix(sshd:session): session closed for user core May 8 00:13:16.754218 systemd[1]: sshd@3-172.232.9.214:22-139.178.89.65:56104.service: Deactivated successfully. May 8 00:13:16.756969 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:13:16.758314 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. May 8 00:13:16.759463 systemd-logind[1455]: Removed session 4. May 8 00:13:16.813715 systemd[1]: Started sshd@4-172.232.9.214:22-139.178.89.65:56114.service - OpenSSH per-connection server daemon (139.178.89.65:56114). May 8 00:13:17.141075 sshd[1653]: Accepted publickey for core from 139.178.89.65 port 56114 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:13:17.142756 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:17.148404 systemd-logind[1455]: New session 5 of user core. May 8 00:13:17.154717 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:13:17.345261 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:13:17.345595 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:17.362750 sudo[1656]: pam_unix(sudo:session): session closed for user root May 8 00:13:17.412519 sshd[1655]: Connection closed by 139.178.89.65 port 56114 May 8 00:13:17.413429 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 8 00:13:17.418416 systemd[1]: sshd@4-172.232.9.214:22-139.178.89.65:56114.service: Deactivated successfully. May 8 00:13:17.421169 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:13:17.422534 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. May 8 00:13:17.423513 systemd-logind[1455]: Removed session 5. May 8 00:13:17.480855 systemd[1]: Started sshd@5-172.232.9.214:22-139.178.89.65:56130.service - OpenSSH per-connection server daemon (139.178.89.65:56130). May 8 00:13:17.821654 sshd[1662]: Accepted publickey for core from 139.178.89.65 port 56130 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:13:17.822916 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:17.827362 systemd-logind[1455]: New session 6 of user core. 
May 8 00:13:17.834723 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:13:18.022823 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:13:18.023144 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:18.027325 sudo[1666]: pam_unix(sudo:session): session closed for user root May 8 00:13:18.033459 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:13:18.033800 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:18.046838 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:13:18.079102 augenrules[1688]: No rules May 8 00:13:18.079561 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:13:18.079860 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:13:18.080915 sudo[1665]: pam_unix(sudo:session): session closed for user root May 8 00:13:18.133362 sshd[1664]: Connection closed by 139.178.89.65 port 56130 May 8 00:13:18.133805 sshd-session[1662]: pam_unix(sshd:session): session closed for user core May 8 00:13:18.137671 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. May 8 00:13:18.137943 systemd[1]: sshd@5-172.232.9.214:22-139.178.89.65:56130.service: Deactivated successfully. May 8 00:13:18.139996 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:13:18.140835 systemd-logind[1455]: Removed session 6. May 8 00:13:18.191766 systemd[1]: Started sshd@6-172.232.9.214:22-139.178.89.65:56136.service - OpenSSH per-connection server daemon (139.178.89.65:56136). May 8 00:13:18.528571 sshd[1697]: Accepted publickey for core from 139.178.89.65 port 56136 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:13:18.529798 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:13:18.535817 systemd-logind[1455]: New session 7 of user core. May 8 00:13:18.542739 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:13:18.724950 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:13:18.725459 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:13:19.003024 (dockerd)[1716]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:13:19.003238 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:13:19.288295 dockerd[1716]: time="2025-05-08T00:13:19.288115987Z" level=info msg="Starting up" May 8 00:13:19.382138 dockerd[1716]: time="2025-05-08T00:13:19.382099814Z" level=info msg="Loading containers: start." May 8 00:13:19.564667 kernel: Initializing XFRM netlink socket May 8 00:13:19.593782 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 8 00:13:19.595260 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 8 00:13:19.601561 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. May 8 00:13:19.652032 systemd-networkd[1388]: docker0: Link UP May 8 00:13:19.652271 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. 
May 8 00:13:19.682223 dockerd[1716]: time="2025-05-08T00:13:19.682173034Z" level=info msg="Loading containers: done." May 8 00:13:19.700350 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2207964475-merged.mount: Deactivated successfully. May 8 00:13:19.701209 dockerd[1716]: time="2025-05-08T00:13:19.700483973Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:13:19.701209 dockerd[1716]: time="2025-05-08T00:13:19.700562403Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:13:19.701209 dockerd[1716]: time="2025-05-08T00:13:19.700773643Z" level=info msg="Daemon has completed initialization" May 8 00:13:19.728654 dockerd[1716]: time="2025-05-08T00:13:19.728090337Z" level=info msg="API listen on /run/docker.sock" May 8 00:13:19.732686 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:13:20.329433 containerd[1469]: time="2025-05-08T00:13:20.328862507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 00:13:21.259876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138248101.mount: Deactivated successfully. May 8 00:13:22.796320 containerd[1469]: time="2025-05-08T00:13:22.795184920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:22.796320 containerd[1469]: time="2025-05-08T00:13:22.796276530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 8 00:13:22.796875 containerd[1469]: time="2025-05-08T00:13:22.796846290Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:22.799493 containerd[1469]: time="2025-05-08T00:13:22.799464862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:22.800734 containerd[1469]: time="2025-05-08T00:13:22.800707522Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.471799335s" May 8 00:13:22.800783 containerd[1469]: time="2025-05-08T00:13:22.800738712Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 8 00:13:22.801789 containerd[1469]: time="2025-05-08T00:13:22.801763533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 00:13:23.268908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:13:23.273950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:23.425053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
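The PullImage requests interleaved with the kubelet restart attempts (kube-apiserver above, then kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd below) are the control-plane image set that kubeadm pre-pulls for v1.32.4, fetched through containerd's CRI interface rather than through the Docker daemon that just started. Assuming kubeadm and crictl are present on the node (the log does not show them being installed), roughly equivalent manual commands would be:

    # Pre-fetch the kubeadm control-plane images for this release.
    sudo kubeadm config images pull --kubernetes-version v1.32.4
    # Or pull a single image directly over the CRI socket containerd exposes.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.4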
May 8 00:13:23.429104 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:13:23.479438 kubelet[1963]: E0508 00:13:23.479395 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:13:23.485558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:13:23.485767 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:13:23.486133 systemd[1]: kubelet.service: Consumed 196ms CPU time, 105.5M memory peak. May 8 00:13:24.598303 containerd[1469]: time="2025-05-08T00:13:24.597966390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:24.599569 containerd[1469]: time="2025-05-08T00:13:24.599521241Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 8 00:13:24.600129 containerd[1469]: time="2025-05-08T00:13:24.600067081Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:24.603085 containerd[1469]: time="2025-05-08T00:13:24.603062173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:24.604019 containerd[1469]: time="2025-05-08T00:13:24.603825143Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.80203413s" May 8 00:13:24.604019 containerd[1469]: time="2025-05-08T00:13:24.603859633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 8 00:13:24.605189 containerd[1469]: time="2025-05-08T00:13:24.604750654Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 00:13:26.327228 containerd[1469]: time="2025-05-08T00:13:26.327183714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:26.328095 containerd[1469]: time="2025-05-08T00:13:26.328057785Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 8 00:13:26.328517 containerd[1469]: time="2025-05-08T00:13:26.328479125Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:26.330980 containerd[1469]: time="2025-05-08T00:13:26.330944766Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:26.332442 containerd[1469]: time="2025-05-08T00:13:26.332265817Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.727485743s" May 8 00:13:26.332442 containerd[1469]: time="2025-05-08T00:13:26.332291277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 8 00:13:26.332842 containerd[1469]: time="2025-05-08T00:13:26.332825447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:13:27.458867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415561329.mount: Deactivated successfully. May 8 00:13:28.291124 containerd[1469]: time="2025-05-08T00:13:28.290867436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:28.292166 containerd[1469]: time="2025-05-08T00:13:28.291743776Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 8 00:13:28.293113 containerd[1469]: time="2025-05-08T00:13:28.292991307Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:28.297636 containerd[1469]: time="2025-05-08T00:13:28.294796038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:28.297636 containerd[1469]: time="2025-05-08T00:13:28.295571268Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.962653701s" May 8 00:13:28.297636 containerd[1469]: time="2025-05-08T00:13:28.295594438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 8 00:13:28.298255 containerd[1469]: time="2025-05-08T00:13:28.298228119Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 8 00:13:29.143992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2727033344.mount: Deactivated successfully. 
May 8 00:13:29.997030 containerd[1469]: time="2025-05-08T00:13:29.996962178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:29.998084 containerd[1469]: time="2025-05-08T00:13:29.997979359Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 8 00:13:29.999337 containerd[1469]: time="2025-05-08T00:13:29.998994879Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:30.002502 containerd[1469]: time="2025-05-08T00:13:30.001486430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:30.002502 containerd[1469]: time="2025-05-08T00:13:30.002376001Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.704119512s" May 8 00:13:30.002502 containerd[1469]: time="2025-05-08T00:13:30.002400091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 8 00:13:30.003505 containerd[1469]: time="2025-05-08T00:13:30.003487381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:13:30.670867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394262592.mount: Deactivated successfully. 
May 8 00:13:30.674849 containerd[1469]: time="2025-05-08T00:13:30.674792697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:30.676082 containerd[1469]: time="2025-05-08T00:13:30.675819677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 8 00:13:30.678630 containerd[1469]: time="2025-05-08T00:13:30.677758328Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:30.681701 containerd[1469]: time="2025-05-08T00:13:30.681662660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:30.683036 containerd[1469]: time="2025-05-08T00:13:30.683009721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 679.44348ms" May 8 00:13:30.683138 containerd[1469]: time="2025-05-08T00:13:30.683040951Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 00:13:30.683743 containerd[1469]: time="2025-05-08T00:13:30.683707851Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 8 00:13:31.481853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853496682.mount: Deactivated successfully. May 8 00:13:33.521645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:13:33.533751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:33.724840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:13:33.726058 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:13:33.769153 containerd[1469]: time="2025-05-08T00:13:33.769102273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:33.772206 containerd[1469]: time="2025-05-08T00:13:33.772153564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 8 00:13:33.774753 containerd[1469]: time="2025-05-08T00:13:33.773643735Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:33.778657 containerd[1469]: time="2025-05-08T00:13:33.778533487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:33.782322 containerd[1469]: time="2025-05-08T00:13:33.782090569Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.098353998s" May 8 00:13:33.782322 containerd[1469]: time="2025-05-08T00:13:33.782316919Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 8 00:13:33.791642 kubelet[2106]: E0508 00:13:33.787041 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:13:33.794939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:13:33.795401 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:13:33.795833 systemd[1]: kubelet.service: Consumed 184ms CPU time, 105.8M memory peak. May 8 00:13:37.496702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:37.497539 systemd[1]: kubelet.service: Consumed 184ms CPU time, 105.8M memory peak. May 8 00:13:37.504885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:37.543474 systemd[1]: Reload requested from client PID 2137 ('systemctl') (unit session-7.scope)... May 8 00:13:37.543490 systemd[1]: Reloading... May 8 00:13:37.699641 zram_generator::config[2185]: No configuration found. May 8 00:13:37.813043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:37.910401 systemd[1]: Reloading finished in 366 ms. May 8 00:13:37.957824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:37.965465 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:37.967292 systemd[1]: kubelet.service: Deactivated successfully. 
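The systemctl-driven reload requested from session-7 and the kubelet stop/start cycle that follows are what a kubeadm bootstrap looks like from systemd's side: once the kubelet config file and the KUBELET_KUBEADM_ARGS environment file are in place, the unit is reloaded and restarted, and on the next start (below) kubelet comes up with its full flag set instead of exiting. kubeadm performs these steps itself; a manual sketch of the same sequence would be:

    # After /var/lib/kubelet/config.yaml and the kubeadm flag environment file exist:
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    # Follow just this unit rather than the whole journal:
    sudo journalctl -u kubelet -f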
May 8 00:13:37.967573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:37.967648 systemd[1]: kubelet.service: Consumed 134ms CPU time, 91.7M memory peak. May 8 00:13:37.969640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:38.155077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:38.163568 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:13:38.207965 kubelet[2238]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:13:38.207965 kubelet[2238]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:13:38.207965 kubelet[2238]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:13:38.208277 kubelet[2238]: I0508 00:13:38.208032 2238 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:13:38.572624 kubelet[2238]: I0508 00:13:38.571173 2238 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:13:38.572624 kubelet[2238]: I0508 00:13:38.571227 2238 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:13:38.572624 kubelet[2238]: I0508 00:13:38.571770 2238 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:13:38.601895 kubelet[2238]: E0508 00:13:38.601866 2238 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.232.9.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:38.603072 kubelet[2238]: I0508 00:13:38.603057 2238 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:13:38.614295 kubelet[2238]: E0508 00:13:38.614232 2238 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:13:38.614295 kubelet[2238]: I0508 00:13:38.614265 2238 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:13:38.618383 kubelet[2238]: I0508 00:13:38.618362 2238 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:13:38.619855 kubelet[2238]: I0508 00:13:38.619806 2238 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:13:38.620019 kubelet[2238]: I0508 00:13:38.619845 2238 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-9-214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:13:38.620019 kubelet[2238]: I0508 00:13:38.620016 2238 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:13:38.620171 kubelet[2238]: I0508 00:13:38.620026 2238 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:13:38.620171 kubelet[2238]: I0508 00:13:38.620155 2238 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:38.624750 kubelet[2238]: I0508 00:13:38.624552 2238 kubelet.go:446] "Attempting to sync node with API server" May 8 00:13:38.624750 kubelet[2238]: I0508 00:13:38.624579 2238 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:13:38.624750 kubelet[2238]: I0508 00:13:38.624596 2238 kubelet.go:352] "Adding apiserver pod source" May 8 00:13:38.624750 kubelet[2238]: I0508 00:13:38.624627 2238 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:13:38.628907 kubelet[2238]: W0508 00:13:38.628269 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.9.214:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-9-214&limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:38.628907 kubelet[2238]: E0508 00:13:38.628355 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.232.9.214:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-9-214&limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:38.629653 kubelet[2238]: W0508 00:13:38.629591 2238 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.9.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:38.629789 kubelet[2238]: E0508 00:13:38.629744 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.9.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:38.629883 kubelet[2238]: I0508 00:13:38.629859 2238 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:13:38.630675 kubelet[2238]: I0508 00:13:38.630299 2238 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:13:38.630675 kubelet[2238]: W0508 00:13:38.630393 2238 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:13:38.633415 kubelet[2238]: I0508 00:13:38.633017 2238 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:13:38.633415 kubelet[2238]: I0508 00:13:38.633079 2238 server.go:1287] "Started kubelet" May 8 00:13:38.641747 kubelet[2238]: E0508 00:13:38.640193 2238 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.9.214:6443/api/v1/namespaces/default/events\": dial tcp 172.232.9.214:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-9-214.183d64f170dcd593 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-9-214,UID:172-232-9-214,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-9-214,},FirstTimestamp:2025-05-08 00:13:38.633037203 +0000 UTC m=+0.464272053,LastTimestamp:2025-05-08 00:13:38.633037203 +0000 UTC m=+0.464272053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-9-214,}" May 8 00:13:38.643653 kubelet[2238]: I0508 00:13:38.642382 2238 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:13:38.643653 kubelet[2238]: I0508 00:13:38.642751 2238 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:13:38.643653 kubelet[2238]: I0508 00:13:38.642810 2238 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:13:38.643979 kubelet[2238]: I0508 00:13:38.643965 2238 server.go:490] "Adding debug handlers to kubelet server" May 8 00:13:38.645458 kubelet[2238]: I0508 00:13:38.645362 2238 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:13:38.648302 kubelet[2238]: E0508 00:13:38.648273 2238 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:13:38.648707 kubelet[2238]: I0508 00:13:38.648695 2238 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:13:38.648912 kubelet[2238]: I0508 00:13:38.648897 2238 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:13:38.651246 kubelet[2238]: I0508 00:13:38.651228 2238 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:13:38.651360 kubelet[2238]: I0508 00:13:38.651350 2238 reconciler.go:26] "Reconciler: start to sync state" May 8 00:13:38.651878 kubelet[2238]: E0508 00:13:38.651859 2238 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172-232-9-214\" not found" May 8 00:13:38.653525 kubelet[2238]: E0508 00:13:38.653496 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.9.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-9-214?timeout=10s\": dial tcp 172.232.9.214:6443: connect: connection refused" interval="200ms" May 8 00:13:38.653823 kubelet[2238]: I0508 00:13:38.653717 2238 factory.go:221] Registration of the systemd container factory successfully May 8 00:13:38.653946 kubelet[2238]: I0508 00:13:38.653930 2238 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:13:38.654324 kubelet[2238]: W0508 00:13:38.654294 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.9.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:38.655013 kubelet[2238]: E0508 00:13:38.654995 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.9.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:38.656346 kubelet[2238]: I0508 00:13:38.656330 2238 factory.go:221] Registration of the containerd container factory successfully May 8 00:13:38.668762 kubelet[2238]: I0508 00:13:38.668739 2238 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:13:38.670236 kubelet[2238]: I0508 00:13:38.670221 2238 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:13:38.670293 kubelet[2238]: I0508 00:13:38.670284 2238 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:13:38.670355 kubelet[2238]: I0508 00:13:38.670345 2238 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:13:38.670400 kubelet[2238]: I0508 00:13:38.670392 2238 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:13:38.670497 kubelet[2238]: E0508 00:13:38.670481 2238 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:13:38.678515 kubelet[2238]: W0508 00:13:38.678444 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.9.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:38.678568 kubelet[2238]: E0508 00:13:38.678545 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.232.9.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:38.691910 kubelet[2238]: I0508 00:13:38.691887 2238 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:13:38.691910 kubelet[2238]: I0508 00:13:38.691906 2238 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:13:38.691983 kubelet[2238]: I0508 00:13:38.691925 2238 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:38.693967 kubelet[2238]: I0508 00:13:38.693937 2238 policy_none.go:49] "None policy: Start" May 8 00:13:38.693967 kubelet[2238]: I0508 00:13:38.693968 2238 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:13:38.694047 kubelet[2238]: I0508 00:13:38.693982 2238 state_mem.go:35] "Initializing new in-memory state store" May 8 00:13:38.702097 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:13:38.723621 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:13:38.727632 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:13:38.738934 kubelet[2238]: I0508 00:13:38.738472 2238 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:13:38.738934 kubelet[2238]: I0508 00:13:38.738720 2238 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:13:38.738934 kubelet[2238]: I0508 00:13:38.738734 2238 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:13:38.739029 kubelet[2238]: I0508 00:13:38.738955 2238 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:13:38.739837 kubelet[2238]: E0508 00:13:38.739805 2238 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:13:38.740014 kubelet[2238]: E0508 00:13:38.739984 2238 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-9-214\" not found" May 8 00:13:38.794331 systemd[1]: Created slice kubepods-burstable-pode3c46f8876d5b782136b9be3963f1566.slice - libcontainer container kubepods-burstable-pode3c46f8876d5b782136b9be3963f1566.slice. 
May 8 00:13:38.810377 kubelet[2238]: E0508 00:13:38.808472 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:38.814128 systemd[1]: Created slice kubepods-burstable-pode73ec5816fcab21098f812ab1ebd55d5.slice - libcontainer container kubepods-burstable-pode73ec5816fcab21098f812ab1ebd55d5.slice. May 8 00:13:38.821823 kubelet[2238]: E0508 00:13:38.821796 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:38.834718 systemd[1]: Created slice kubepods-burstable-pod0069c317cb6a02ccea5626eab3f60f82.slice - libcontainer container kubepods-burstable-pod0069c317cb6a02ccea5626eab3f60f82.slice. May 8 00:13:38.836552 kubelet[2238]: E0508 00:13:38.836513 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:38.841988 kubelet[2238]: I0508 00:13:38.841953 2238 kubelet_node_status.go:76] "Attempting to register node" node="172-232-9-214" May 8 00:13:38.842519 kubelet[2238]: E0508 00:13:38.842463 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.232.9.214:6443/api/v1/nodes\": dial tcp 172.232.9.214:6443: connect: connection refused" node="172-232-9-214" May 8 00:13:38.851793 kubelet[2238]: I0508 00:13:38.851753 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-kubeconfig\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:38.851793 kubelet[2238]: I0508 00:13:38.851788 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:38.851866 kubelet[2238]: I0508 00:13:38.851812 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0069c317cb6a02ccea5626eab3f60f82-kubeconfig\") pod \"kube-scheduler-172-232-9-214\" (UID: \"0069c317cb6a02ccea5626eab3f60f82\") " pod="kube-system/kube-scheduler-172-232-9-214" May 8 00:13:38.851866 kubelet[2238]: I0508 00:13:38.851832 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3c46f8876d5b782136b9be3963f1566-ca-certs\") pod \"kube-apiserver-172-232-9-214\" (UID: \"e3c46f8876d5b782136b9be3963f1566\") " pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:38.851866 kubelet[2238]: I0508 00:13:38.851848 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3c46f8876d5b782136b9be3963f1566-k8s-certs\") pod \"kube-apiserver-172-232-9-214\" (UID: \"e3c46f8876d5b782136b9be3963f1566\") " pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:38.851950 kubelet[2238]: I0508 00:13:38.851869 2238 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3c46f8876d5b782136b9be3963f1566-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-9-214\" (UID: \"e3c46f8876d5b782136b9be3963f1566\") " pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:38.851950 kubelet[2238]: I0508 00:13:38.851887 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-ca-certs\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:38.851950 kubelet[2238]: I0508 00:13:38.851907 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-flexvolume-dir\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:38.851950 kubelet[2238]: I0508 00:13:38.851924 2238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-k8s-certs\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:38.855158 kubelet[2238]: E0508 00:13:38.855122 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.9.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-9-214?timeout=10s\": dial tcp 172.232.9.214:6443: connect: connection refused" interval="400ms" May 8 00:13:39.045759 kubelet[2238]: I0508 00:13:39.045652 2238 kubelet_node_status.go:76] "Attempting to register node" node="172-232-9-214" May 8 00:13:39.046142 kubelet[2238]: E0508 00:13:39.046106 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.232.9.214:6443/api/v1/nodes\": dial tcp 172.232.9.214:6443: connect: connection refused" node="172-232-9-214" May 8 00:13:39.110136 kubelet[2238]: E0508 00:13:39.109953 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:39.111371 containerd[1469]: time="2025-05-08T00:13:39.111099352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-9-214,Uid:e3c46f8876d5b782136b9be3963f1566,Namespace:kube-system,Attempt:0,}" May 8 00:13:39.123178 kubelet[2238]: E0508 00:13:39.123140 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:39.123961 containerd[1469]: time="2025-05-08T00:13:39.123891998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-9-214,Uid:e73ec5816fcab21098f812ab1ebd55d5,Namespace:kube-system,Attempt:0,}" May 8 00:13:39.137309 kubelet[2238]: E0508 00:13:39.137266 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 
172.232.0.19" May 8 00:13:39.138076 containerd[1469]: time="2025-05-08T00:13:39.138030135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-9-214,Uid:0069c317cb6a02ccea5626eab3f60f82,Namespace:kube-system,Attempt:0,}" May 8 00:13:39.255759 kubelet[2238]: E0508 00:13:39.255661 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.9.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-9-214?timeout=10s\": dial tcp 172.232.9.214:6443: connect: connection refused" interval="800ms" May 8 00:13:39.449228 kubelet[2238]: I0508 00:13:39.449130 2238 kubelet_node_status.go:76] "Attempting to register node" node="172-232-9-214" May 8 00:13:39.450101 kubelet[2238]: E0508 00:13:39.450055 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.232.9.214:6443/api/v1/nodes\": dial tcp 172.232.9.214:6443: connect: connection refused" node="172-232-9-214" May 8 00:13:39.472221 kubelet[2238]: W0508 00:13:39.472150 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.9.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:39.472322 kubelet[2238]: E0508 00:13:39.472245 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.9.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:39.543100 kubelet[2238]: W0508 00:13:39.543010 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.9.214:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-9-214&limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:39.543100 kubelet[2238]: E0508 00:13:39.543089 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.232.9.214:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-9-214&limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:39.559114 kubelet[2238]: W0508 00:13:39.559068 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.9.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:39.559161 kubelet[2238]: E0508 00:13:39.559113 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.9.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:39.786600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645720024.mount: Deactivated successfully. 
May 8 00:13:39.792458 containerd[1469]: time="2025-05-08T00:13:39.791772532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:13:39.793900 containerd[1469]: time="2025-05-08T00:13:39.793088273Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:13:39.794594 containerd[1469]: time="2025-05-08T00:13:39.794328163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:13:39.795133 containerd[1469]: time="2025-05-08T00:13:39.795069944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:13:39.796801 containerd[1469]: time="2025-05-08T00:13:39.796531834Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:13:39.797913 containerd[1469]: time="2025-05-08T00:13:39.797862745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:13:39.801572 containerd[1469]: time="2025-05-08T00:13:39.801532967Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:13:39.804123 containerd[1469]: time="2025-05-08T00:13:39.803928438Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.88687ms" May 8 00:13:39.804589 containerd[1469]: time="2025-05-08T00:13:39.804566488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:13:39.806138 kubelet[2238]: W0508 00:13:39.806111 2238 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.9.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.9.214:6443: connect: connection refused May 8 00:13:39.806295 kubelet[2238]: E0508 00:13:39.806241 2238 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.232.9.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.9.214:6443: connect: connection refused" logger="UnhandledError" May 8 00:13:39.806499 containerd[1469]: time="2025-05-08T00:13:39.806469889Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 695.024997ms" May 8 00:13:39.807944 containerd[1469]: 
time="2025-05-08T00:13:39.807916900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 669.786765ms" May 8 00:13:39.902746 containerd[1469]: time="2025-05-08T00:13:39.902601327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:39.902982 containerd[1469]: time="2025-05-08T00:13:39.902726477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:39.903163 containerd[1469]: time="2025-05-08T00:13:39.903037738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:39.903785 containerd[1469]: time="2025-05-08T00:13:39.903666938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:39.906053 containerd[1469]: time="2025-05-08T00:13:39.905889429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:39.906274 containerd[1469]: time="2025-05-08T00:13:39.906058469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:39.906274 containerd[1469]: time="2025-05-08T00:13:39.906077739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:39.906274 containerd[1469]: time="2025-05-08T00:13:39.906182919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:39.907061 containerd[1469]: time="2025-05-08T00:13:39.906233199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:39.907061 containerd[1469]: time="2025-05-08T00:13:39.906636439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:39.907061 containerd[1469]: time="2025-05-08T00:13:39.906652049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:39.907061 containerd[1469]: time="2025-05-08T00:13:39.907006530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:39.942772 systemd[1]: Started cri-containerd-68318cd57f0d4b2b7c63e98c9a2a55e363470a9e3244542d27273ac777d6b868.scope - libcontainer container 68318cd57f0d4b2b7c63e98c9a2a55e363470a9e3244542d27273ac777d6b868. May 8 00:13:39.945805 systemd[1]: Started cri-containerd-c37ac65bfcb33e0b6e40d5082869464b48c3ce3b2a35377d9e54266650d5df29.scope - libcontainer container c37ac65bfcb33e0b6e40d5082869464b48c3ce3b2a35377d9e54266650d5df29. May 8 00:13:39.948633 systemd[1]: Started cri-containerd-fe6a80224ea32e6953edf82340d9177be6e43487d7055cedfa9def603af38308.scope - libcontainer container fe6a80224ea32e6953edf82340d9177be6e43487d7055cedfa9def603af38308. 
May 8 00:13:40.023166 containerd[1469]: time="2025-05-08T00:13:40.023109308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-9-214,Uid:0069c317cb6a02ccea5626eab3f60f82,Namespace:kube-system,Attempt:0,} returns sandbox id \"68318cd57f0d4b2b7c63e98c9a2a55e363470a9e3244542d27273ac777d6b868\"" May 8 00:13:40.027387 kubelet[2238]: E0508 00:13:40.026897 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:40.038072 containerd[1469]: time="2025-05-08T00:13:40.037870875Z" level=info msg="CreateContainer within sandbox \"68318cd57f0d4b2b7c63e98c9a2a55e363470a9e3244542d27273ac777d6b868\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:13:40.044391 containerd[1469]: time="2025-05-08T00:13:40.043911638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-9-214,Uid:e3c46f8876d5b782136b9be3963f1566,Namespace:kube-system,Attempt:0,} returns sandbox id \"c37ac65bfcb33e0b6e40d5082869464b48c3ce3b2a35377d9e54266650d5df29\"" May 8 00:13:40.047306 kubelet[2238]: E0508 00:13:40.047286 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:40.048727 containerd[1469]: time="2025-05-08T00:13:40.048000650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-9-214,Uid:e73ec5816fcab21098f812ab1ebd55d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe6a80224ea32e6953edf82340d9177be6e43487d7055cedfa9def603af38308\"" May 8 00:13:40.050842 kubelet[2238]: E0508 00:13:40.050818 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:40.052177 containerd[1469]: time="2025-05-08T00:13:40.052120702Z" level=info msg="CreateContainer within sandbox \"c37ac65bfcb33e0b6e40d5082869464b48c3ce3b2a35377d9e54266650d5df29\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:13:40.056008 containerd[1469]: time="2025-05-08T00:13:40.055940814Z" level=info msg="CreateContainer within sandbox \"fe6a80224ea32e6953edf82340d9177be6e43487d7055cedfa9def603af38308\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:13:40.056688 kubelet[2238]: E0508 00:13:40.056628 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.9.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-9-214?timeout=10s\": dial tcp 172.232.9.214:6443: connect: connection refused" interval="1.6s" May 8 00:13:40.072134 containerd[1469]: time="2025-05-08T00:13:40.072026162Z" level=info msg="CreateContainer within sandbox \"68318cd57f0d4b2b7c63e98c9a2a55e363470a9e3244542d27273ac777d6b868\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b07dc44730cb2f74a796ab9ab220ec7c2a684c21929435be6740a7aff9c3c514\"" May 8 00:13:40.074272 containerd[1469]: time="2025-05-08T00:13:40.072747062Z" level=info msg="StartContainer for \"b07dc44730cb2f74a796ab9ab220ec7c2a684c21929435be6740a7aff9c3c514\"" May 8 00:13:40.076949 containerd[1469]: time="2025-05-08T00:13:40.076860364Z" level=info msg="CreateContainer within sandbox 
\"fe6a80224ea32e6953edf82340d9177be6e43487d7055cedfa9def603af38308\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df724d083bf196a2edbf2a609951f3aba0d0c1fb33a50727898cd5e28fec28f2\"" May 8 00:13:40.077430 containerd[1469]: time="2025-05-08T00:13:40.077402775Z" level=info msg="StartContainer for \"df724d083bf196a2edbf2a609951f3aba0d0c1fb33a50727898cd5e28fec28f2\"" May 8 00:13:40.080667 containerd[1469]: time="2025-05-08T00:13:40.080594206Z" level=info msg="CreateContainer within sandbox \"c37ac65bfcb33e0b6e40d5082869464b48c3ce3b2a35377d9e54266650d5df29\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ff2b9d3a09ea887de35c64a0bf12f0729ac81eca66ebfcc02989d990a974389\"" May 8 00:13:40.081584 containerd[1469]: time="2025-05-08T00:13:40.081561307Z" level=info msg="StartContainer for \"6ff2b9d3a09ea887de35c64a0bf12f0729ac81eca66ebfcc02989d990a974389\"" May 8 00:13:40.126713 systemd[1]: Started cri-containerd-b07dc44730cb2f74a796ab9ab220ec7c2a684c21929435be6740a7aff9c3c514.scope - libcontainer container b07dc44730cb2f74a796ab9ab220ec7c2a684c21929435be6740a7aff9c3c514. May 8 00:13:40.136747 systemd[1]: Started cri-containerd-6ff2b9d3a09ea887de35c64a0bf12f0729ac81eca66ebfcc02989d990a974389.scope - libcontainer container 6ff2b9d3a09ea887de35c64a0bf12f0729ac81eca66ebfcc02989d990a974389. May 8 00:13:40.146991 systemd[1]: Started cri-containerd-df724d083bf196a2edbf2a609951f3aba0d0c1fb33a50727898cd5e28fec28f2.scope - libcontainer container df724d083bf196a2edbf2a609951f3aba0d0c1fb33a50727898cd5e28fec28f2. May 8 00:13:40.209038 containerd[1469]: time="2025-05-08T00:13:40.208749330Z" level=info msg="StartContainer for \"b07dc44730cb2f74a796ab9ab220ec7c2a684c21929435be6740a7aff9c3c514\" returns successfully" May 8 00:13:40.230228 containerd[1469]: time="2025-05-08T00:13:40.229950841Z" level=info msg="StartContainer for \"6ff2b9d3a09ea887de35c64a0bf12f0729ac81eca66ebfcc02989d990a974389\" returns successfully" May 8 00:13:40.241950 containerd[1469]: time="2025-05-08T00:13:40.241903997Z" level=info msg="StartContainer for \"df724d083bf196a2edbf2a609951f3aba0d0c1fb33a50727898cd5e28fec28f2\" returns successfully" May 8 00:13:40.252942 kubelet[2238]: I0508 00:13:40.252919 2238 kubelet_node_status.go:76] "Attempting to register node" node="172-232-9-214" May 8 00:13:40.254697 kubelet[2238]: E0508 00:13:40.254673 2238 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.232.9.214:6443/api/v1/nodes\": dial tcp 172.232.9.214:6443: connect: connection refused" node="172-232-9-214" May 8 00:13:40.659573 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 8 00:13:40.699566 kubelet[2238]: E0508 00:13:40.699352 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:40.699566 kubelet[2238]: E0508 00:13:40.699490 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:40.699933 kubelet[2238]: E0508 00:13:40.699573 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:40.700834 kubelet[2238]: E0508 00:13:40.700092 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:40.704390 kubelet[2238]: E0508 00:13:40.704167 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:40.704545 kubelet[2238]: E0508 00:13:40.704518 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:41.663361 kubelet[2238]: E0508 00:13:41.663276 2238 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:41.704829 kubelet[2238]: E0508 00:13:41.704523 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:41.704829 kubelet[2238]: E0508 00:13:41.704538 2238 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-9-214\" not found" node="172-232-9-214" May 8 00:13:41.704829 kubelet[2238]: E0508 00:13:41.704685 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:41.704829 kubelet[2238]: E0508 00:13:41.704701 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:41.857068 kubelet[2238]: I0508 00:13:41.857039 2238 kubelet_node_status.go:76] "Attempting to register node" node="172-232-9-214" May 8 00:13:41.862545 kubelet[2238]: I0508 00:13:41.862514 2238 kubelet_node_status.go:79] "Successfully registered node" node="172-232-9-214" May 8 00:13:41.953380 kubelet[2238]: I0508 00:13:41.952988 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:41.958924 kubelet[2238]: E0508 00:13:41.958860 2238 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-9-214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:41.958924 kubelet[2238]: I0508 00:13:41.958900 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:41.961167 kubelet[2238]: E0508 00:13:41.961113 2238 kubelet.go:3202] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-9-214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:41.961167 kubelet[2238]: I0508 00:13:41.961159 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-9-214" May 8 00:13:41.963265 kubelet[2238]: E0508 00:13:41.963234 2238 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-9-214\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-9-214" May 8 00:13:42.631321 kubelet[2238]: I0508 00:13:42.630912 2238 apiserver.go:52] "Watching apiserver" May 8 00:13:42.652252 kubelet[2238]: I0508 00:13:42.652203 2238 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:13:43.446458 kubelet[2238]: I0508 00:13:43.446415 2238 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:43.452775 kubelet[2238]: E0508 00:13:43.452729 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:43.620670 systemd[1]: Reload requested from client PID 2516 ('systemctl') (unit session-7.scope)... May 8 00:13:43.620693 systemd[1]: Reloading... May 8 00:13:43.707847 kubelet[2238]: E0508 00:13:43.707723 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:43.759668 zram_generator::config[2563]: No configuration found. May 8 00:13:43.878189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:13:43.984108 systemd[1]: Reloading finished in 363 ms. May 8 00:13:44.010983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:44.020161 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:13:44.020524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:44.020581 systemd[1]: kubelet.service: Consumed 936ms CPU time, 126.7M memory peak. May 8 00:13:44.025958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:13:44.194759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:13:44.205194 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:13:44.255020 kubelet[2611]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:13:44.255020 kubelet[2611]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:13:44.255020 kubelet[2611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:13:44.255020 kubelet[2611]: I0508 00:13:44.254704 2611 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:13:44.263548 kubelet[2611]: I0508 00:13:44.263509 2611 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:13:44.263548 kubelet[2611]: I0508 00:13:44.263530 2611 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:13:44.263803 kubelet[2611]: I0508 00:13:44.263777 2611 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:13:44.264896 kubelet[2611]: I0508 00:13:44.264872 2611 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:13:44.267440 kubelet[2611]: I0508 00:13:44.267021 2611 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:13:44.270438 kubelet[2611]: E0508 00:13:44.270376 2611 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:13:44.270489 kubelet[2611]: I0508 00:13:44.270451 2611 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:13:44.274322 kubelet[2611]: I0508 00:13:44.274292 2611 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:13:44.274516 kubelet[2611]: I0508 00:13:44.274484 2611 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:13:44.274669 kubelet[2611]: I0508 00:13:44.274510 2611 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-9-214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 
00:13:44.274669 kubelet[2611]: I0508 00:13:44.274669 2611 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:13:44.274773 kubelet[2611]: I0508 00:13:44.274678 2611 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:13:44.274773 kubelet[2611]: I0508 00:13:44.274716 2611 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:44.274876 kubelet[2611]: I0508 00:13:44.274862 2611 kubelet.go:446] "Attempting to sync node with API server" May 8 00:13:44.274920 kubelet[2611]: I0508 00:13:44.274882 2611 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:13:44.274920 kubelet[2611]: I0508 00:13:44.274902 2611 kubelet.go:352] "Adding apiserver pod source" May 8 00:13:44.274920 kubelet[2611]: I0508 00:13:44.274910 2611 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:13:44.279645 kubelet[2611]: I0508 00:13:44.279581 2611 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:13:44.280857 kubelet[2611]: I0508 00:13:44.280189 2611 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:13:44.281077 kubelet[2611]: I0508 00:13:44.281065 2611 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:13:44.281147 kubelet[2611]: I0508 00:13:44.281138 2611 server.go:1287] "Started kubelet" May 8 00:13:44.288372 kubelet[2611]: I0508 00:13:44.288185 2611 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:13:44.288749 kubelet[2611]: I0508 00:13:44.288708 2611 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:13:44.289073 kubelet[2611]: I0508 00:13:44.289057 2611 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:13:44.290099 kubelet[2611]: I0508 00:13:44.289592 2611 server.go:490] "Adding debug handlers to kubelet server" May 8 00:13:44.292110 kubelet[2611]: I0508 00:13:44.292082 2611 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:13:44.296137 kubelet[2611]: E0508 00:13:44.296120 2611 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:13:44.297024 kubelet[2611]: I0508 00:13:44.296885 2611 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:13:44.301876 kubelet[2611]: I0508 00:13:44.301861 2611 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:13:44.302023 kubelet[2611]: I0508 00:13:44.302012 2611 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:13:44.302174 kubelet[2611]: I0508 00:13:44.302162 2611 reconciler.go:26] "Reconciler: start to sync state" May 8 00:13:44.302767 kubelet[2611]: I0508 00:13:44.302752 2611 factory.go:221] Registration of the systemd container factory successfully May 8 00:13:44.302912 kubelet[2611]: I0508 00:13:44.302894 2611 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:13:44.304321 kubelet[2611]: I0508 00:13:44.304307 2611 factory.go:221] Registration of the containerd container factory successfully May 8 00:13:44.307583 kubelet[2611]: I0508 00:13:44.307525 2611 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:13:44.309694 kubelet[2611]: I0508 00:13:44.309673 2611 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:13:44.309745 kubelet[2611]: I0508 00:13:44.309700 2611 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:13:44.309745 kubelet[2611]: I0508 00:13:44.309718 2611 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:13:44.309745 kubelet[2611]: I0508 00:13:44.309725 2611 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:13:44.309820 kubelet[2611]: E0508 00:13:44.309773 2611 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377212 2611 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377228 2611 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377246 2611 state_mem.go:36] "Initialized new in-memory state store" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377382 2611 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377393 2611 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377409 2611 policy_none.go:49] "None policy: Start" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377418 2611 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377430 2611 state_mem.go:35] "Initializing new in-memory state store" May 8 00:13:44.378200 kubelet[2611]: I0508 00:13:44.377511 2611 state_mem.go:75] "Updated machine memory state" May 8 00:13:44.383761 kubelet[2611]: I0508 00:13:44.383744 2611 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:13:44.383982 kubelet[2611]: I0508 00:13:44.383959 2611 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:13:44.384649 kubelet[2611]: I0508 00:13:44.384622 2611 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:13:44.385703 kubelet[2611]: I0508 00:13:44.385588 2611 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:13:44.389202 kubelet[2611]: E0508 00:13:44.389117 2611 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:13:44.413231 kubelet[2611]: I0508 00:13:44.412920 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-9-214" May 8 00:13:44.413990 kubelet[2611]: I0508 00:13:44.413972 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:44.415528 kubelet[2611]: I0508 00:13:44.414199 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:44.427976 kubelet[2611]: E0508 00:13:44.427940 2611 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-9-214\" already exists" pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:44.498349 kubelet[2611]: I0508 00:13:44.498068 2611 kubelet_node_status.go:76] "Attempting to register node" node="172-232-9-214" May 8 00:13:44.503164 kubelet[2611]: I0508 00:13:44.503095 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-kubeconfig\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:44.503221 kubelet[2611]: I0508 00:13:44.503172 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3c46f8876d5b782136b9be3963f1566-ca-certs\") pod \"kube-apiserver-172-232-9-214\" (UID: \"e3c46f8876d5b782136b9be3963f1566\") " pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:44.504002 kubelet[2611]: I0508 00:13:44.503244 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3c46f8876d5b782136b9be3963f1566-k8s-certs\") pod \"kube-apiserver-172-232-9-214\" (UID: \"e3c46f8876d5b782136b9be3963f1566\") " pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:44.504002 kubelet[2611]: I0508 00:13:44.503300 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3c46f8876d5b782136b9be3963f1566-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-9-214\" (UID: \"e3c46f8876d5b782136b9be3963f1566\") " pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:44.504002 kubelet[2611]: I0508 00:13:44.503320 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-ca-certs\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:44.504002 kubelet[2611]: I0508 00:13:44.503335 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-flexvolume-dir\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:44.504002 kubelet[2611]: I0508 00:13:44.503349 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-k8s-certs\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:44.504146 kubelet[2611]: I0508 00:13:44.503376 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e73ec5816fcab21098f812ab1ebd55d5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-9-214\" (UID: \"e73ec5816fcab21098f812ab1ebd55d5\") " pod="kube-system/kube-controller-manager-172-232-9-214" May 8 00:13:44.504146 kubelet[2611]: I0508 00:13:44.503593 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0069c317cb6a02ccea5626eab3f60f82-kubeconfig\") pod \"kube-scheduler-172-232-9-214\" (UID: \"0069c317cb6a02ccea5626eab3f60f82\") " pod="kube-system/kube-scheduler-172-232-9-214" May 8 00:13:44.506684 kubelet[2611]: I0508 00:13:44.505216 2611 kubelet_node_status.go:125] "Node was previously registered" node="172-232-9-214" May 8 00:13:44.506684 kubelet[2611]: I0508 00:13:44.505477 2611 kubelet_node_status.go:79] "Successfully registered node" node="172-232-9-214" May 8 00:13:44.619795 sudo[2644]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:13:44.620192 sudo[2644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:13:44.724107 kubelet[2611]: E0508 00:13:44.722968 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:44.724400 kubelet[2611]: E0508 00:13:44.724372 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:44.728291 kubelet[2611]: E0508 00:13:44.728274 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:45.156880 sudo[2644]: pam_unix(sudo:session): session closed for user root May 8 00:13:45.276160 kubelet[2611]: I0508 00:13:45.276101 2611 apiserver.go:52] "Watching apiserver" May 8 00:13:45.302938 kubelet[2611]: I0508 00:13:45.302895 2611 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:13:45.341366 kubelet[2611]: I0508 00:13:45.341312 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-9-214" May 8 00:13:45.343645 kubelet[2611]: I0508 00:13:45.342442 2611 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:45.343645 kubelet[2611]: E0508 00:13:45.343197 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:45.355113 kubelet[2611]: E0508 00:13:45.355078 2611 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-9-214\" already exists" pod="kube-system/kube-apiserver-172-232-9-214" May 8 00:13:45.355236 kubelet[2611]: E0508 00:13:45.355206 2611 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:45.355793 kubelet[2611]: E0508 00:13:45.355766 2611 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-9-214\" already exists" pod="kube-system/kube-scheduler-172-232-9-214" May 8 00:13:45.356032 kubelet[2611]: E0508 00:13:45.355903 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:45.380642 kubelet[2611]: I0508 00:13:45.380575 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-9-214" podStartSLOduration=1.380562005 podStartE2EDuration="1.380562005s" podCreationTimestamp="2025-05-08 00:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:45.380050264 +0000 UTC m=+1.168811335" watchObservedRunningTime="2025-05-08 00:13:45.380562005 +0000 UTC m=+1.169323076" May 8 00:13:45.393726 kubelet[2611]: I0508 00:13:45.392679 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-9-214" podStartSLOduration=2.392669561 podStartE2EDuration="2.392669561s" podCreationTimestamp="2025-05-08 00:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:45.39189801 +0000 UTC m=+1.180659081" watchObservedRunningTime="2025-05-08 00:13:45.392669561 +0000 UTC m=+1.181430632" May 8 00:13:45.399561 kubelet[2611]: I0508 00:13:45.398986 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-9-214" podStartSLOduration=1.398977994 podStartE2EDuration="1.398977994s" podCreationTimestamp="2025-05-08 00:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:45.398887084 +0000 UTC m=+1.187648155" watchObservedRunningTime="2025-05-08 00:13:45.398977994 +0000 UTC m=+1.187739065" May 8 00:13:46.343394 kubelet[2611]: E0508 00:13:46.343330 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:46.343877 kubelet[2611]: E0508 00:13:46.343801 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:46.344677 kubelet[2611]: E0508 00:13:46.344059 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:46.500834 sudo[1700]: pam_unix(sudo:session): session closed for user root May 8 00:13:46.552179 sshd[1699]: Connection closed by 139.178.89.65 port 56136 May 8 00:13:46.552897 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 8 00:13:46.559600 systemd[1]: sshd@6-172.232.9.214:22-139.178.89.65:56136.service: Deactivated successfully. May 8 00:13:46.563061 systemd[1]: session-7.scope: Deactivated successfully. 
May 8 00:13:46.563337 systemd[1]: session-7.scope: Consumed 4.076s CPU time, 260.7M memory peak. May 8 00:13:46.565221 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. May 8 00:13:46.566554 systemd-logind[1455]: Removed session 7. May 8 00:13:46.656582 kubelet[2611]: E0508 00:13:46.656201 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:13:48.901898 kubelet[2611]: I0508 00:13:48.901852 2611 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:13:48.902740 containerd[1469]: time="2025-05-08T00:13:48.902672284Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:13:48.903298 kubelet[2611]: I0508 00:13:48.902952 2611 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:13:50.929860 systemd-timesyncd[1393]: Contacted time server [2602:ff06:725:100::123]:123 (2.flatcar.pool.ntp.org). May 8 00:13:50.929936 systemd-timesyncd[1393]: Initial clock synchronization to Thu 2025-05-08 00:13:50.929577 UTC. May 8 00:13:50.930007 systemd-resolved[1391]: Clock change detected. Flushing caches. May 8 00:13:51.015747 systemd[1]: Created slice kubepods-besteffort-pod0b5a7e35_61a1_49ce_bfb9_aae6aa5c4247.slice - libcontainer container kubepods-besteffort-pod0b5a7e35_61a1_49ce_bfb9_aae6aa5c4247.slice. May 8 00:13:51.059516 systemd[1]: Created slice kubepods-burstable-pode9dc8683_e723_4c18_836e_51cdf78442d6.slice - libcontainer container kubepods-burstable-pode9dc8683_e723_4c18_836e_51cdf78442d6.slice. May 8 00:13:51.074640 kubelet[2611]: W0508 00:13:51.070410 2611 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-232-9-214" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-9-214' and this object May 8 00:13:51.074640 kubelet[2611]: E0508 00:13:51.070700 2611 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-232-9-214\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-214' and this object" logger="UnhandledError" May 8 00:13:51.106225 kubelet[2611]: I0508 00:13:51.106188 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cni-path\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.106389 kubelet[2611]: I0508 00:13:51.106375 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-hubble-tls\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.106513 kubelet[2611]: I0508 00:13:51.106499 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-hostproc\") pod \"cilium-g8vjj\" (UID: 
\"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.106613 kubelet[2611]: I0508 00:13:51.106583 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-config-path\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.106767 kubelet[2611]: I0508 00:13:51.106750 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktmw\" (UniqueName: \"kubernetes.io/projected/0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247-kube-api-access-7ktmw\") pod \"kube-proxy-v6lxv\" (UID: \"0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247\") " pod="kube-system/kube-proxy-v6lxv" May 8 00:13:51.106855 kubelet[2611]: I0508 00:13:51.106841 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-cgroup\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.106930 kubelet[2611]: I0508 00:13:51.106919 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-etc-cni-netd\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.107223 kubelet[2611]: I0508 00:13:51.107210 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-net\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.107301 kubelet[2611]: I0508 00:13:51.107289 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxc6\" (UniqueName: \"kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-kube-api-access-5sxc6\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.107374 kubelet[2611]: I0508 00:13:51.107362 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-xtables-lock\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.107473 kubelet[2611]: I0508 00:13:51.107460 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247-xtables-lock\") pod \"kube-proxy-v6lxv\" (UID: \"0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247\") " pod="kube-system/kube-proxy-v6lxv" May 8 00:13:51.107752 kubelet[2611]: I0508 00:13:51.107738 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-run\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.107923 kubelet[2611]: I0508 00:13:51.107909 2611 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9dc8683-e723-4c18-836e-51cdf78442d6-clustermesh-secrets\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.109832 kubelet[2611]: I0508 00:13:51.109768 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247-lib-modules\") pod \"kube-proxy-v6lxv\" (UID: \"0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247\") " pod="kube-system/kube-proxy-v6lxv" May 8 00:13:51.110136 kubelet[2611]: I0508 00:13:51.109965 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-bpf-maps\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.110265 kubelet[2611]: I0508 00:13:51.110250 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-lib-modules\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.110483 kubelet[2611]: I0508 00:13:51.110346 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-kernel\") pod \"cilium-g8vjj\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " pod="kube-system/cilium-g8vjj" May 8 00:13:51.111335 kubelet[2611]: I0508 00:13:51.111316 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247-kube-proxy\") pod \"kube-proxy-v6lxv\" (UID: \"0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247\") " pod="kube-system/kube-proxy-v6lxv" May 8 00:13:51.158377 kubelet[2611]: I0508 00:13:51.158332 2611 status_manager.go:890] "Failed to get status for pod" podUID="eb62ec72-25d7-41e5-9058-ae1e3a53b2e7" pod="kube-system/cilium-operator-6c4d7847fc-vd5k8" err="pods \"cilium-operator-6c4d7847fc-vd5k8\" is forbidden: User \"system:node:172-232-9-214\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-214' and this object" May 8 00:13:51.160035 systemd[1]: Created slice kubepods-besteffort-podeb62ec72_25d7_41e5_9058_ae1e3a53b2e7.slice - libcontainer container kubepods-besteffort-podeb62ec72_25d7_41e5_9058_ae1e3a53b2e7.slice. 
May 8 00:13:51.213407 kubelet[2611]: I0508 00:13:51.212530 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vd5k8\" (UID: \"eb62ec72-25d7-41e5-9058-ae1e3a53b2e7\") " pod="kube-system/cilium-operator-6c4d7847fc-vd5k8" May 8 00:13:51.213407 kubelet[2611]: I0508 00:13:51.212694 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn47x\" (UniqueName: \"kubernetes.io/projected/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-kube-api-access-kn47x\") pod \"cilium-operator-6c4d7847fc-vd5k8\" (UID: \"eb62ec72-25d7-41e5-9058-ae1e3a53b2e7\") " pod="kube-system/cilium-operator-6c4d7847fc-vd5k8" May 8 00:13:51.325790 kubelet[2611]: E0508 00:13:51.325755 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:51.328085 containerd[1469]: time="2025-05-08T00:13:51.328017500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6lxv,Uid:0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247,Namespace:kube-system,Attempt:0,}" May 8 00:13:51.354222 containerd[1469]: time="2025-05-08T00:13:51.354035043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:51.354222 containerd[1469]: time="2025-05-08T00:13:51.354090953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:51.354222 containerd[1469]: time="2025-05-08T00:13:51.354104023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.354222 containerd[1469]: time="2025-05-08T00:13:51.354175643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.374740 systemd[1]: Started cri-containerd-a9f437f07cbdb21c649840c9dd08b15dd81b05e81afafbd440253c898d7ccbd6.scope - libcontainer container a9f437f07cbdb21c649840c9dd08b15dd81b05e81afafbd440253c898d7ccbd6. 
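The recurring dns.go:153 "Nameserver limits exceeded" errors are noisy but not fatal: the host resolv.conf lists more than three nameservers, and the kubelet only propagates the first three (here 172.232.0.22, 172.232.0.9 and 172.232.0.19) into pod DNS configuration. A rough sketch of that truncation over a resolv.conf-style file; the kubelet's real logic lives in its dns package, this only illustrates the limit:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the per-pod limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// This is the condition behind the repeated "Nameserver limits
		// exceeded" entries: the extras are dropped, nothing breaks.
		fmt.Printf("dropping %d extra nameservers\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}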
May 8 00:13:51.399766 containerd[1469]: time="2025-05-08T00:13:51.399430096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6lxv,Uid:0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f437f07cbdb21c649840c9dd08b15dd81b05e81afafbd440253c898d7ccbd6\"" May 8 00:13:51.401237 kubelet[2611]: E0508 00:13:51.400744 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:51.405020 containerd[1469]: time="2025-05-08T00:13:51.404999509Z" level=info msg="CreateContainer within sandbox \"a9f437f07cbdb21c649840c9dd08b15dd81b05e81afafbd440253c898d7ccbd6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:13:51.419916 containerd[1469]: time="2025-05-08T00:13:51.419893856Z" level=info msg="CreateContainer within sandbox \"a9f437f07cbdb21c649840c9dd08b15dd81b05e81afafbd440253c898d7ccbd6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4813413036405394922ab5a76b04f72b24835dae266cf27072f65b0e915d9368\"" May 8 00:13:51.420481 containerd[1469]: time="2025-05-08T00:13:51.420418376Z" level=info msg="StartContainer for \"4813413036405394922ab5a76b04f72b24835dae266cf27072f65b0e915d9368\"" May 8 00:13:51.451738 systemd[1]: Started cri-containerd-4813413036405394922ab5a76b04f72b24835dae266cf27072f65b0e915d9368.scope - libcontainer container 4813413036405394922ab5a76b04f72b24835dae266cf27072f65b0e915d9368. May 8 00:13:51.463330 kubelet[2611]: E0508 00:13:51.462979 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:51.463616 containerd[1469]: time="2025-05-08T00:13:51.463524938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vd5k8,Uid:eb62ec72-25d7-41e5-9058-ae1e3a53b2e7,Namespace:kube-system,Attempt:0,}" May 8 00:13:51.491620 containerd[1469]: time="2025-05-08T00:13:51.489642041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:51.491620 containerd[1469]: time="2025-05-08T00:13:51.489695481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:51.491620 containerd[1469]: time="2025-05-08T00:13:51.489707981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.491620 containerd[1469]: time="2025-05-08T00:13:51.490114391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:51.501968 containerd[1469]: time="2025-05-08T00:13:51.500996296Z" level=info msg="StartContainer for \"4813413036405394922ab5a76b04f72b24835dae266cf27072f65b0e915d9368\" returns successfully" May 8 00:13:51.523727 systemd[1]: Started cri-containerd-1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05.scope - libcontainer container 1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05. 
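The kube-proxy-v6lxv pod above just went through the standard CRI sequence against containerd: RunPodSandbox returns the sandbox id (a9f437f0...), CreateContainer places the kube-proxy container inside it, and StartContainer produces the "returns successfully" entry. A compressed sketch of those three calls with the v1 CRI gRPC API (k8s.io/cri-api); the socket path, the image tag, and everything omitted from the configs are assumptions, and this is not how the kubelet itself is wired:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	// Assumed default containerd CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-v6lxv",
			Uid:       "0b5a7e35-61a1-49ce-bfb9-aae6aa5c4247",
			Namespace: "kube-system",
		},
	}
	// 1. RunPodSandbox: analogous to the sandbox id logged above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	// 2. CreateContainer inside that sandbox (image tag is illustrative and
	// assumed to be present already; pulling goes through the ImageService).
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	// 3. StartContainer, the step that yields "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}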
May 8 00:13:51.577294 containerd[1469]: time="2025-05-08T00:13:51.577024474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vd5k8,Uid:eb62ec72-25d7-41e5-9058-ae1e3a53b2e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\"" May 8 00:13:51.578569 kubelet[2611]: E0508 00:13:51.578509 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:51.579972 containerd[1469]: time="2025-05-08T00:13:51.579764376Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:13:52.217265 kubelet[2611]: E0508 00:13:52.217201 2611 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 8 00:13:52.217265 kubelet[2611]: E0508 00:13:52.217248 2611 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-g8vjj: failed to sync secret cache: timed out waiting for the condition May 8 00:13:52.218975 kubelet[2611]: E0508 00:13:52.217353 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-hubble-tls podName:e9dc8683-e723-4c18-836e-51cdf78442d6 nodeName:}" failed. No retries permitted until 2025-05-08 00:13:52.717327224 +0000 UTC m=+7.445101061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-hubble-tls") pod "cilium-g8vjj" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6") : failed to sync secret cache: timed out waiting for the condition May 8 00:13:52.272221 kubelet[2611]: E0508 00:13:52.271958 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:52.414916 kubelet[2611]: E0508 00:13:52.414877 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:52.417125 kubelet[2611]: E0508 00:13:52.417082 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:52.441817 kubelet[2611]: I0508 00:13:52.441516 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v6lxv" podStartSLOduration=2.441502216 podStartE2EDuration="2.441502216s" podCreationTimestamp="2025-05-08 00:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:13:52.429965891 +0000 UTC m=+7.157739728" watchObservedRunningTime="2025-05-08 00:13:52.441502216 +0000 UTC m=+7.169276053" May 8 00:13:52.864520 kubelet[2611]: E0508 00:13:52.864268 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:52.866167 containerd[1469]: time="2025-05-08T00:13:52.865675928Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-g8vjj,Uid:e9dc8683-e723-4c18-836e-51cdf78442d6,Namespace:kube-system,Attempt:0,}" May 8 00:13:52.894839 containerd[1469]: time="2025-05-08T00:13:52.894082563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:13:52.894839 containerd[1469]: time="2025-05-08T00:13:52.894245593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:13:52.894839 containerd[1469]: time="2025-05-08T00:13:52.894258623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:52.894839 containerd[1469]: time="2025-05-08T00:13:52.894351433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:13:52.926776 systemd[1]: Started cri-containerd-6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407.scope - libcontainer container 6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407. May 8 00:13:52.957205 containerd[1469]: time="2025-05-08T00:13:52.957148294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g8vjj,Uid:e9dc8683-e723-4c18-836e-51cdf78442d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\"" May 8 00:13:52.959076 kubelet[2611]: E0508 00:13:52.958561 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:53.302772 containerd[1469]: time="2025-05-08T00:13:53.301995406Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.304628 containerd[1469]: time="2025-05-08T00:13:53.303260807Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 00:13:53.305897 containerd[1469]: time="2025-05-08T00:13:53.305757658Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:53.307328 containerd[1469]: time="2025-05-08T00:13:53.307291099Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.727483733s" May 8 00:13:53.307391 containerd[1469]: time="2025-05-08T00:13:53.307329149Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:13:53.309499 containerd[1469]: time="2025-05-08T00:13:53.309461560Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 
00:13:53.310895 containerd[1469]: time="2025-05-08T00:13:53.310786731Z" level=info msg="CreateContainer within sandbox \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:13:53.333749 containerd[1469]: time="2025-05-08T00:13:53.333708782Z" level=info msg="CreateContainer within sandbox \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\"" May 8 00:13:53.334556 containerd[1469]: time="2025-05-08T00:13:53.334522773Z" level=info msg="StartContainer for \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\"" May 8 00:13:53.367725 systemd[1]: Started cri-containerd-c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816.scope - libcontainer container c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816. May 8 00:13:53.399386 containerd[1469]: time="2025-05-08T00:13:53.399338045Z" level=info msg="StartContainer for \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\" returns successfully" May 8 00:13:53.420920 kubelet[2611]: E0508 00:13:53.420881 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:53.422940 kubelet[2611]: E0508 00:13:53.422915 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:53.898642 kubelet[2611]: E0508 00:13:53.898519 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:13:54.424290 kubelet[2611]: E0508 00:13:54.424246 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:54.663177 kubelet[2611]: E0508 00:13:54.661181 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:54.673441 kubelet[2611]: I0508 00:13:54.673231 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vd5k8" podStartSLOduration=1.944089717 podStartE2EDuration="3.673216911s" podCreationTimestamp="2025-05-08 00:13:51 +0000 UTC" firstStartedPulling="2025-05-08 00:13:51.579324916 +0000 UTC m=+6.307098753" lastFinishedPulling="2025-05-08 00:13:53.30845211 +0000 UTC m=+8.036225947" observedRunningTime="2025-05-08 00:13:53.435023763 +0000 UTC m=+8.162797600" watchObservedRunningTime="2025-05-08 00:13:54.673216911 +0000 UTC m=+9.400990748" May 8 00:13:54.722704 kubelet[2611]: E0508 00:13:54.722586 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:13:55.425925 kubelet[2611]: E0508 00:13:55.425884 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:55.866672 update_engine[1459]: I20250508 00:13:55.866617 1459 
update_attempter.cc:509] Updating boot flags... May 8 00:13:55.940831 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (3042) May 8 00:13:56.080687 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (3044) May 8 00:13:56.239569 kubelet[2611]: E0508 00:13:56.239114 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:13:56.427574 kubelet[2611]: E0508 00:13:56.427205 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:56.772406 kubelet[2611]: E0508 00:13:56.772354 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:13:57.884855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332864695.mount: Deactivated successfully. May 8 00:13:59.525030 containerd[1469]: time="2025-05-08T00:13:59.524965046Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:59.526032 containerd[1469]: time="2025-05-08T00:13:59.525979556Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:13:59.526901 containerd[1469]: time="2025-05-08T00:13:59.526856707Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:13:59.528795 containerd[1469]: time="2025-05-08T00:13:59.528769468Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.219274068s" May 8 00:13:59.528938 containerd[1469]: time="2025-05-08T00:13:59.528864538Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:13:59.532461 containerd[1469]: time="2025-05-08T00:13:59.532421779Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:13:59.550068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017623004.mount: Deactivated successfully. 
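Two image pulls complete in this stretch: the cilium operator image earlier (size 18897442 bytes in 1.727483733s) and the main cilium agent image just above (size 166719855 bytes in 6.219274068s). A quick check of the effective pull rates those containerd entries imply:

package main

import "fmt"

func main() {
	// Sizes and durations copied from the containerd "Pulled image ... in ..." entries.
	pulls := []struct {
		name    string
		bytes   float64
		seconds float64
	}{
		{"cilium/operator-generic:v1.12.5", 18897442, 1.727483733},
		{"cilium/cilium:v1.12.5", 166719855, 6.219274068},
	}
	for _, p := range pulls {
		fmt.Printf("%-32s %.1f MB/s\n", p.name, p.bytes/p.seconds/1e6)
	}
	// Prints roughly 10.9 MB/s and 26.8 MB/s: the larger image pulls faster,
	// which is consistent with per-pull setup overhead dominating small images.
}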
May 8 00:13:59.551613 containerd[1469]: time="2025-05-08T00:13:59.551286079Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\"" May 8 00:13:59.552124 containerd[1469]: time="2025-05-08T00:13:59.552048589Z" level=info msg="StartContainer for \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\"" May 8 00:13:59.581455 systemd[1]: run-containerd-runc-k8s.io-6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5-runc.nLeBoc.mount: Deactivated successfully. May 8 00:13:59.590834 systemd[1]: Started cri-containerd-6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5.scope - libcontainer container 6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5. May 8 00:13:59.622546 containerd[1469]: time="2025-05-08T00:13:59.622489834Z" level=info msg="StartContainer for \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\" returns successfully" May 8 00:13:59.637173 systemd[1]: cri-containerd-6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5.scope: Deactivated successfully. May 8 00:13:59.730442 containerd[1469]: time="2025-05-08T00:13:59.730368048Z" level=info msg="shim disconnected" id=6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5 namespace=k8s.io May 8 00:13:59.730442 containerd[1469]: time="2025-05-08T00:13:59.730420638Z" level=warning msg="cleaning up after shim disconnected" id=6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5 namespace=k8s.io May 8 00:13:59.730442 containerd[1469]: time="2025-05-08T00:13:59.730429508Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:00.438301 kubelet[2611]: E0508 00:14:00.438242 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:00.446848 containerd[1469]: time="2025-05-08T00:14:00.446186176Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:14:00.461945 containerd[1469]: time="2025-05-08T00:14:00.460352733Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\"" May 8 00:14:00.464696 containerd[1469]: time="2025-05-08T00:14:00.463544065Z" level=info msg="StartContainer for \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\"" May 8 00:14:00.497885 systemd[1]: Started cri-containerd-3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc.scope - libcontainer container 3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc. May 8 00:14:00.534323 containerd[1469]: time="2025-05-08T00:14:00.534170710Z" level=info msg="StartContainer for \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\" returns successfully" May 8 00:14:00.545857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5-rootfs.mount: Deactivated successfully. 
May 8 00:14:00.553839 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:14:00.554482 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:14:00.554688 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:14:00.561207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:14:00.563426 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:14:00.564140 systemd[1]: cri-containerd-3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc.scope: Deactivated successfully. May 8 00:14:00.588802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:14:00.605486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc-rootfs.mount: Deactivated successfully. May 8 00:14:00.607938 containerd[1469]: time="2025-05-08T00:14:00.607849797Z" level=info msg="shim disconnected" id=3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc namespace=k8s.io May 8 00:14:00.607938 containerd[1469]: time="2025-05-08T00:14:00.607926377Z" level=warning msg="cleaning up after shim disconnected" id=3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc namespace=k8s.io May 8 00:14:00.607938 containerd[1469]: time="2025-05-08T00:14:00.607941837Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:01.183351 kubelet[2611]: E0508 00:14:01.183311 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:01.441066 kubelet[2611]: E0508 00:14:01.440967 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:01.444143 containerd[1469]: time="2025-05-08T00:14:01.443983515Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:14:01.463234 containerd[1469]: time="2025-05-08T00:14:01.463192164Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\"" May 8 00:14:01.465091 containerd[1469]: time="2025-05-08T00:14:01.463922704Z" level=info msg="StartContainer for \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\"" May 8 00:14:01.497887 systemd[1]: Started cri-containerd-4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6.scope - libcontainer container 4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6. May 8 00:14:01.532667 containerd[1469]: time="2025-05-08T00:14:01.532634979Z" level=info msg="StartContainer for \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\" returns successfully" May 8 00:14:01.538051 systemd[1]: cri-containerd-4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6.scope: Deactivated successfully. May 8 00:14:01.561506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6-rootfs.mount: Deactivated successfully. 
May 8 00:14:01.565931 containerd[1469]: time="2025-05-08T00:14:01.565845155Z" level=info msg="shim disconnected" id=4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6 namespace=k8s.io May 8 00:14:01.565931 containerd[1469]: time="2025-05-08T00:14:01.565931815Z" level=warning msg="cleaning up after shim disconnected" id=4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6 namespace=k8s.io May 8 00:14:01.565931 containerd[1469]: time="2025-05-08T00:14:01.565940935Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:02.444959 kubelet[2611]: E0508 00:14:02.444935 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:02.447943 containerd[1469]: time="2025-05-08T00:14:02.447910896Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:14:02.462181 containerd[1469]: time="2025-05-08T00:14:02.462153593Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\"" May 8 00:14:02.463445 containerd[1469]: time="2025-05-08T00:14:02.462880784Z" level=info msg="StartContainer for \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\"" May 8 00:14:02.497869 systemd[1]: Started cri-containerd-530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007.scope - libcontainer container 530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007. May 8 00:14:02.524701 systemd[1]: cri-containerd-530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007.scope: Deactivated successfully. May 8 00:14:02.525473 containerd[1469]: time="2025-05-08T00:14:02.525442995Z" level=info msg="StartContainer for \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\" returns successfully" May 8 00:14:02.547737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007-rootfs.mount: Deactivated successfully. May 8 00:14:02.556097 containerd[1469]: time="2025-05-08T00:14:02.556033250Z" level=info msg="shim disconnected" id=530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007 namespace=k8s.io May 8 00:14:02.556097 containerd[1469]: time="2025-05-08T00:14:02.556087840Z" level=warning msg="cleaning up after shim disconnected" id=530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007 namespace=k8s.io May 8 00:14:02.556097 containerd[1469]: time="2025-05-08T00:14:02.556097380Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:14:03.448810 kubelet[2611]: E0508 00:14:03.448759 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:03.451194 containerd[1469]: time="2025-05-08T00:14:03.451141677Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:14:03.469902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897018913.mount: Deactivated successfully. 
May 8 00:14:03.471179 containerd[1469]: time="2025-05-08T00:14:03.471140257Z" level=info msg="CreateContainer within sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\"" May 8 00:14:03.474342 containerd[1469]: time="2025-05-08T00:14:03.472411778Z" level=info msg="StartContainer for \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\"" May 8 00:14:03.515073 systemd[1]: Started cri-containerd-0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe.scope - libcontainer container 0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe. May 8 00:14:03.551417 containerd[1469]: time="2025-05-08T00:14:03.551363697Z" level=info msg="StartContainer for \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\" returns successfully" May 8 00:14:03.585200 systemd[1]: run-containerd-runc-k8s.io-0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe-runc.skjOpt.mount: Deactivated successfully. May 8 00:14:03.667425 kubelet[2611]: I0508 00:14:03.667387 2611 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:14:03.703702 systemd[1]: Created slice kubepods-burstable-pod7549b50a_a1f8_4f6e_89d6_a0122ecf4113.slice - libcontainer container kubepods-burstable-pod7549b50a_a1f8_4f6e_89d6_a0122ecf4113.slice. May 8 00:14:03.710791 systemd[1]: Created slice kubepods-burstable-pod4b255c26_f4a5_4ce4_a868_1c2e78a3a55e.slice - libcontainer container kubepods-burstable-pod4b255c26_f4a5_4ce4_a868_1c2e78a3a55e.slice. May 8 00:14:03.811537 kubelet[2611]: I0508 00:14:03.811422 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7549b50a-a1f8-4f6e-89d6-a0122ecf4113-config-volume\") pod \"coredns-668d6bf9bc-vghmh\" (UID: \"7549b50a-a1f8-4f6e-89d6-a0122ecf4113\") " pod="kube-system/coredns-668d6bf9bc-vghmh" May 8 00:14:03.811537 kubelet[2611]: I0508 00:14:03.811466 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b255c26-f4a5-4ce4-a868-1c2e78a3a55e-config-volume\") pod \"coredns-668d6bf9bc-7gfvw\" (UID: \"4b255c26-f4a5-4ce4-a868-1c2e78a3a55e\") " pod="kube-system/coredns-668d6bf9bc-7gfvw" May 8 00:14:03.811537 kubelet[2611]: I0508 00:14:03.811489 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4jcx\" (UniqueName: \"kubernetes.io/projected/7549b50a-a1f8-4f6e-89d6-a0122ecf4113-kube-api-access-t4jcx\") pod \"coredns-668d6bf9bc-vghmh\" (UID: \"7549b50a-a1f8-4f6e-89d6-a0122ecf4113\") " pod="kube-system/coredns-668d6bf9bc-vghmh" May 8 00:14:03.811537 kubelet[2611]: I0508 00:14:03.811509 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqc6k\" (UniqueName: \"kubernetes.io/projected/4b255c26-f4a5-4ce4-a868-1c2e78a3a55e-kube-api-access-bqc6k\") pod \"coredns-668d6bf9bc-7gfvw\" (UID: \"4b255c26-f4a5-4ce4-a868-1c2e78a3a55e\") " pod="kube-system/coredns-668d6bf9bc-7gfvw" May 8 00:14:04.008884 kubelet[2611]: E0508 00:14:04.008487 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:04.009852 
containerd[1469]: time="2025-05-08T00:14:04.009802827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vghmh,Uid:7549b50a-a1f8-4f6e-89d6-a0122ecf4113,Namespace:kube-system,Attempt:0,}" May 8 00:14:04.013766 kubelet[2611]: E0508 00:14:04.013146 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:04.014031 containerd[1469]: time="2025-05-08T00:14:04.013996159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7gfvw,Uid:4b255c26-f4a5-4ce4-a868-1c2e78a3a55e,Namespace:kube-system,Attempt:0,}" May 8 00:14:04.456920 kubelet[2611]: E0508 00:14:04.456711 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:04.470701 kubelet[2611]: I0508 00:14:04.470007 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g8vjj" podStartSLOduration=6.901213154 podStartE2EDuration="13.469995006s" podCreationTimestamp="2025-05-08 00:13:51 +0000 UTC" firstStartedPulling="2025-05-08 00:13:52.960660536 +0000 UTC m=+7.688434373" lastFinishedPulling="2025-05-08 00:13:59.529442378 +0000 UTC m=+14.257216225" observedRunningTime="2025-05-08 00:14:04.469316536 +0000 UTC m=+19.197090373" watchObservedRunningTime="2025-05-08 00:14:04.469995006 +0000 UTC m=+19.197768843" May 8 00:14:05.164826 kubelet[2611]: E0508 00:14:05.164752 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:05.459066 kubelet[2611]: E0508 00:14:05.458901 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:05.852870 systemd-networkd[1388]: cilium_host: Link UP May 8 00:14:05.854260 systemd-networkd[1388]: cilium_net: Link UP May 8 00:14:05.855424 systemd-networkd[1388]: cilium_net: Gained carrier May 8 00:14:05.855928 systemd-networkd[1388]: cilium_host: Gained carrier May 8 00:14:05.991671 systemd-networkd[1388]: cilium_vxlan: Link UP May 8 00:14:05.991760 systemd-networkd[1388]: cilium_vxlan: Gained carrier May 8 00:14:06.228819 kernel: NET: Registered PF_ALG protocol family May 8 00:14:06.467898 kubelet[2611]: E0508 00:14:06.467818 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:06.587316 systemd-networkd[1388]: cilium_net: Gained IPv6LL May 8 00:14:06.649794 systemd-networkd[1388]: cilium_host: Gained IPv6LL May 8 00:14:06.976870 systemd-networkd[1388]: lxc_health: Link UP May 8 00:14:06.978052 systemd-networkd[1388]: lxc_health: Gained carrier May 8 00:14:07.468366 kubelet[2611]: E0508 00:14:07.468320 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:07.572959 kernel: eth0: renamed from tmp40add May 8 00:14:07.576298 systemd-networkd[1388]: lxce4762cd8ed32: Link UP May 8 00:14:07.578411 systemd-networkd[1388]: lxce4762cd8ed32: Gained carrier May 8 00:14:07.589985 systemd-networkd[1388]: lxc135ff7f412b2: Link UP 
May 8 00:14:07.600163 kernel: eth0: renamed from tmp70bdd May 8 00:14:07.612207 systemd-networkd[1388]: lxc135ff7f412b2: Gained carrier May 8 00:14:07.867709 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL May 8 00:14:08.249810 systemd-networkd[1388]: lxc_health: Gained IPv6LL May 8 00:14:08.867648 kubelet[2611]: E0508 00:14:08.866385 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:09.403741 systemd-networkd[1388]: lxce4762cd8ed32: Gained IPv6LL May 8 00:14:09.473731 kubelet[2611]: E0508 00:14:09.472539 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:09.594731 systemd-networkd[1388]: lxc135ff7f412b2: Gained IPv6LL May 8 00:14:10.482059 containerd[1469]: time="2025-05-08T00:14:10.481446880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:10.482059 containerd[1469]: time="2025-05-08T00:14:10.481512810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:10.482059 containerd[1469]: time="2025-05-08T00:14:10.481531410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:10.482059 containerd[1469]: time="2025-05-08T00:14:10.481645500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:10.522942 systemd[1]: Started cri-containerd-40add64d332287c2c4116b75a5829771cf1679a9d859ee7bf9825d6f7ea0643e.scope - libcontainer container 40add64d332287c2c4116b75a5829771cf1679a9d859ee7bf9825d6f7ea0643e. May 8 00:14:10.589427 containerd[1469]: time="2025-05-08T00:14:10.589312604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vghmh,Uid:7549b50a-a1f8-4f6e-89d6-a0122ecf4113,Namespace:kube-system,Attempt:0,} returns sandbox id \"40add64d332287c2c4116b75a5829771cf1679a9d859ee7bf9825d6f7ea0643e\"" May 8 00:14:10.591383 kubelet[2611]: E0508 00:14:10.590844 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:10.595573 containerd[1469]: time="2025-05-08T00:14:10.595521397Z" level=info msg="CreateContainer within sandbox \"40add64d332287c2c4116b75a5829771cf1679a9d859ee7bf9825d6f7ea0643e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:14:10.623370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999034233.mount: Deactivated successfully. 
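At this point the Cilium datapath devices are all up: the cilium_host and cilium_net veth pair, the cilium_vxlan overlay device, lxc_health, and one lxc* device per pod endpoint (the kernel "eth0: renamed from tmp..." lines are the pod-side halves being renamed inside their network namespaces). A trivial way to list what ended up on the host side, using only the Go standard library and nothing Cilium-specific:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Keep only the devices named in the log above: cilium_* and lxc*.
		if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
			fmt.Printf("%-18s flags=%v\n", ifc.Name, ifc.Flags)
		}
	}
}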
May 8 00:14:10.624342 containerd[1469]: time="2025-05-08T00:14:10.624279172Z" level=info msg="CreateContainer within sandbox \"40add64d332287c2c4116b75a5829771cf1679a9d859ee7bf9825d6f7ea0643e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1bdf2639d187932eb02cb8efd646a3c83d2ad7a3ca2d854e26ff07f16d9412a\"" May 8 00:14:10.627024 containerd[1469]: time="2025-05-08T00:14:10.626889943Z" level=info msg="StartContainer for \"c1bdf2639d187932eb02cb8efd646a3c83d2ad7a3ca2d854e26ff07f16d9412a\"" May 8 00:14:10.665782 systemd[1]: Started cri-containerd-c1bdf2639d187932eb02cb8efd646a3c83d2ad7a3ca2d854e26ff07f16d9412a.scope - libcontainer container c1bdf2639d187932eb02cb8efd646a3c83d2ad7a3ca2d854e26ff07f16d9412a. May 8 00:14:10.698364 containerd[1469]: time="2025-05-08T00:14:10.698129878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:14:10.698364 containerd[1469]: time="2025-05-08T00:14:10.698185318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:14:10.698364 containerd[1469]: time="2025-05-08T00:14:10.698197038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:10.698364 containerd[1469]: time="2025-05-08T00:14:10.698275318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:14:10.707937 containerd[1469]: time="2025-05-08T00:14:10.707735633Z" level=info msg="StartContainer for \"c1bdf2639d187932eb02cb8efd646a3c83d2ad7a3ca2d854e26ff07f16d9412a\" returns successfully" May 8 00:14:10.730752 systemd[1]: Started cri-containerd-70bdda1635ea59a504898d9c10ef3150fbcf72335f390c9dafc3ee190b3a70a5.scope - libcontainer container 70bdda1635ea59a504898d9c10ef3150fbcf72335f390c9dafc3ee190b3a70a5. May 8 00:14:10.787740 containerd[1469]: time="2025-05-08T00:14:10.787364483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7gfvw,Uid:4b255c26-f4a5-4ce4-a868-1c2e78a3a55e,Namespace:kube-system,Attempt:0,} returns sandbox id \"70bdda1635ea59a504898d9c10ef3150fbcf72335f390c9dafc3ee190b3a70a5\"" May 8 00:14:10.789203 kubelet[2611]: E0508 00:14:10.788682 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:10.792385 containerd[1469]: time="2025-05-08T00:14:10.792217205Z" level=info msg="CreateContainer within sandbox \"70bdda1635ea59a504898d9c10ef3150fbcf72335f390c9dafc3ee190b3a70a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:14:10.804068 containerd[1469]: time="2025-05-08T00:14:10.803988831Z" level=info msg="CreateContainer within sandbox \"70bdda1635ea59a504898d9c10ef3150fbcf72335f390c9dafc3ee190b3a70a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78af3601c9133623210832c6b66a35ba3cc093ed3b465499e1ab07d1fc4fda8e\"" May 8 00:14:10.804558 containerd[1469]: time="2025-05-08T00:14:10.804510782Z" level=info msg="StartContainer for \"78af3601c9133623210832c6b66a35ba3cc093ed3b465499e1ab07d1fc4fda8e\"" May 8 00:14:10.841016 systemd[1]: Started cri-containerd-78af3601c9133623210832c6b66a35ba3cc093ed3b465499e1ab07d1fc4fda8e.scope - libcontainer container 78af3601c9133623210832c6b66a35ba3cc093ed3b465499e1ab07d1fc4fda8e. 
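The "Unable to authenticate the request due to an error ... invalid bearer token" entries keep recurring throughout this stretch; they come from the kubelet's HTTPS server rejecting some client that presents a stale or malformed token, and while nothing on the node fails because of it, the caller is worth tracking down. One way to test whether a particular token is still accepted is a TokenReview against the API server, which is what the kubelet's webhook authentication delegates to anyway. A hedged client-go sketch; the /etc/kubernetes/admin.conf path and the SUSPECT_TOKEN environment variable are assumptions for illustration:

package main

import (
	"context"
	"fmt"
	"os"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	review := &authv1.TokenReview{
		Spec: authv1.TokenReviewSpec{Token: os.Getenv("SUSPECT_TOKEN")},
	}
	out, err := cs.AuthenticationV1().TokenReviews().Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// authenticated=false plus an error string mirrors the rejections logged above.
	fmt.Printf("authenticated=%v user=%q error=%q\n",
		out.Status.Authenticated, out.Status.User.Username, out.Status.Error)
}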
May 8 00:14:10.882198 containerd[1469]: time="2025-05-08T00:14:10.882132570Z" level=info msg="StartContainer for \"78af3601c9133623210832c6b66a35ba3cc093ed3b465499e1ab07d1fc4fda8e\" returns successfully" May 8 00:14:11.479388 kubelet[2611]: E0508 00:14:11.478817 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:11.483768 kubelet[2611]: E0508 00:14:11.483723 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:11.503166 kubelet[2611]: I0508 00:14:11.503093 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7gfvw" podStartSLOduration=20.503068561 podStartE2EDuration="20.503068561s" podCreationTimestamp="2025-05-08 00:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:14:11.498389118 +0000 UTC m=+26.226162955" watchObservedRunningTime="2025-05-08 00:14:11.503068561 +0000 UTC m=+26.230842398" May 8 00:14:11.514563 kubelet[2611]: I0508 00:14:11.514464 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vghmh" podStartSLOduration=20.514446146 podStartE2EDuration="20.514446146s" podCreationTimestamp="2025-05-08 00:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:14:11.514014756 +0000 UTC m=+26.241788593" watchObservedRunningTime="2025-05-08 00:14:11.514446146 +0000 UTC m=+26.242219983" May 8 00:14:12.486028 kubelet[2611]: E0508 00:14:12.485978 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:12.486702 kubelet[2611]: E0508 00:14:12.486677 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:13.492453 kubelet[2611]: E0508 00:14:13.488935 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:13.494016 kubelet[2611]: E0508 00:14:13.492913 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:14:17.711437 kubelet[2611]: E0508 00:14:17.711381 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:23.897570 kubelet[2611]: E0508 00:14:23.897525 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:24.719662 kubelet[2611]: E0508 00:14:24.719634 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:26.238335 kubelet[2611]: E0508 00:14:26.238295 2611 server.go:321] "Unable to authenticate the request due to an error" 
err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:31.182870 kubelet[2611]: E0508 00:14:31.182826 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:35.164840 kubelet[2611]: E0508 00:14:35.164801 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:47.712271 kubelet[2611]: E0508 00:14:47.712178 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:53.898564 kubelet[2611]: E0508 00:14:53.898517 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:54.719797 kubelet[2611]: E0508 00:14:54.719769 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:14:56.240098 kubelet[2611]: E0508 00:14:56.239857 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:00.371588 kubelet[2611]: E0508 00:15:00.371556 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:01.183116 kubelet[2611]: E0508 00:15:01.183071 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:05.164578 kubelet[2611]: E0508 00:15:05.164103 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:10.371919 kubelet[2611]: E0508 00:15:10.371889 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:15.373047 kubelet[2611]: E0508 00:15:15.372705 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:17.712059 kubelet[2611]: E0508 00:15:17.712029 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:21.373214 kubelet[2611]: E0508 00:15:21.371630 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:23.898375 kubelet[2611]: E0508 00:15:23.898341 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:24.719539 kubelet[2611]: E0508 00:15:24.719500 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:25.373227 kubelet[2611]: E0508 00:15:25.372796 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:26.239336 kubelet[2611]: E0508 00:15:26.239279 2611 server.go:321] "Unable to 
authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:26.373009 kubelet[2611]: E0508 00:15:26.372562 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:30.371844 kubelet[2611]: E0508 00:15:30.371744 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:31.180613 kubelet[2611]: E0508 00:15:31.180573 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:34.372139 kubelet[2611]: E0508 00:15:34.372101 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:15:35.164998 kubelet[2611]: E0508 00:15:35.164966 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:42.868845 systemd[1]: Started sshd@7-172.232.9.214:22-139.178.89.65:51040.service - OpenSSH per-connection server daemon (139.178.89.65:51040). May 8 00:15:43.197279 sshd[4007]: Accepted publickey for core from 139.178.89.65 port 51040 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:15:43.198957 sshd-session[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:43.205108 systemd-logind[1455]: New session 8 of user core. May 8 00:15:43.214736 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:15:43.515876 sshd[4009]: Connection closed by 139.178.89.65 port 51040 May 8 00:15:43.516818 sshd-session[4007]: pam_unix(sshd:session): session closed for user core May 8 00:15:43.521130 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. May 8 00:15:43.521893 systemd[1]: sshd@7-172.232.9.214:22-139.178.89.65:51040.service: Deactivated successfully. May 8 00:15:43.524074 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:15:43.525497 systemd-logind[1455]: Removed session 8. May 8 00:15:45.865846 update_engine[1459]: I20250508 00:15:45.865756 1459 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 8 00:15:45.865846 update_engine[1459]: I20250508 00:15:45.865816 1459 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 8 00:15:45.866358 update_engine[1459]: I20250508 00:15:45.866024 1459 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 8 00:15:45.866806 update_engine[1459]: I20250508 00:15:45.866779 1459 omaha_request_params.cc:62] Current group set to beta May 8 00:15:45.867007 update_engine[1459]: I20250508 00:15:45.866885 1459 update_attempter.cc:499] Already updated boot flags. Skipping. May 8 00:15:45.867007 update_engine[1459]: I20250508 00:15:45.866899 1459 update_attempter.cc:643] Scheduling an action processor start. 
May 8 00:15:45.867007 update_engine[1459]: I20250508 00:15:45.866914 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:15:45.867007 update_engine[1459]: I20250508 00:15:45.866940 1459 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 8 00:15:45.867007 update_engine[1459]: I20250508 00:15:45.866993 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:15:45.867007 update_engine[1459]: I20250508 00:15:45.867003 1459 omaha_request_action.cc:272] Request: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867007 update_engine[1459]: May 8 00:15:45.867299 update_engine[1459]: I20250508 00:15:45.867010 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:15:45.867926 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 8 00:15:45.868191 update_engine[1459]: I20250508 00:15:45.868115 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:15:45.868554 update_engine[1459]: I20250508 00:15:45.868409 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:15:45.922908 update_engine[1459]: E20250508 00:15:45.922860 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:15:45.922959 update_engine[1459]: I20250508 00:15:45.922940 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 8 00:15:46.628791 systemd[1]: Started sshd@8-172.232.9.214:22-186.233.208.13:47052.service - OpenSSH per-connection server daemon (186.233.208.13:47052). May 8 00:15:47.673001 sshd[4024]: Invalid user spider from 186.233.208.13 port 47052 May 8 00:15:47.711275 kubelet[2611]: E0508 00:15:47.711250 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:47.864510 sshd[4024]: Received disconnect from 186.233.208.13 port 47052:11: Bye Bye [preauth] May 8 00:15:47.864510 sshd[4024]: Disconnected from invalid user spider 186.233.208.13 port 47052 [preauth] May 8 00:15:47.867006 systemd[1]: sshd@8-172.232.9.214:22-186.233.208.13:47052.service: Deactivated successfully. May 8 00:15:48.589850 systemd[1]: Started sshd@9-172.232.9.214:22-139.178.89.65:38202.service - OpenSSH per-connection server daemon (139.178.89.65:38202). May 8 00:15:48.916761 sshd[4029]: Accepted publickey for core from 139.178.89.65 port 38202 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:15:48.919009 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:48.924470 systemd-logind[1455]: New session 9 of user core. May 8 00:15:48.929739 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:15:49.230777 sshd[4031]: Connection closed by 139.178.89.65 port 38202 May 8 00:15:49.231941 sshd-session[4029]: pam_unix(sshd:session): session closed for user core May 8 00:15:49.236872 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. May 8 00:15:49.237913 systemd[1]: sshd@9-172.232.9.214:22-139.178.89.65:38202.service: Deactivated successfully. 
May 8 00:15:49.241365 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:15:49.242894 systemd-logind[1455]: Removed session 9. May 8 00:15:53.898179 kubelet[2611]: E0508 00:15:53.898111 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:54.291787 systemd[1]: Started sshd@10-172.232.9.214:22-139.178.89.65:38216.service - OpenSSH per-connection server daemon (139.178.89.65:38216). May 8 00:15:54.625906 sshd[4045]: Accepted publickey for core from 139.178.89.65 port 38216 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:15:54.628040 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:54.633029 systemd-logind[1455]: New session 10 of user core. May 8 00:15:54.635766 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:15:54.719574 kubelet[2611]: E0508 00:15:54.719538 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:54.921474 sshd[4047]: Connection closed by 139.178.89.65 port 38216 May 8 00:15:54.922251 sshd-session[4045]: pam_unix(sshd:session): session closed for user core May 8 00:15:54.926531 systemd[1]: sshd@10-172.232.9.214:22-139.178.89.65:38216.service: Deactivated successfully. May 8 00:15:54.929244 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:15:54.930322 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. May 8 00:15:54.931699 systemd-logind[1455]: Removed session 10. May 8 00:15:54.996950 systemd[1]: Started sshd@11-172.232.9.214:22-139.178.89.65:38222.service - OpenSSH per-connection server daemon (139.178.89.65:38222). May 8 00:15:55.328007 sshd[4059]: Accepted publickey for core from 139.178.89.65 port 38222 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:15:55.329368 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:55.335278 systemd-logind[1455]: New session 11 of user core. May 8 00:15:55.343856 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:15:55.666616 sshd[4061]: Connection closed by 139.178.89.65 port 38222 May 8 00:15:55.667621 sshd-session[4059]: pam_unix(sshd:session): session closed for user core May 8 00:15:55.673055 systemd[1]: sshd@11-172.232.9.214:22-139.178.89.65:38222.service: Deactivated successfully. May 8 00:15:55.675203 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:15:55.676439 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. May 8 00:15:55.677531 systemd-logind[1455]: Removed session 11. May 8 00:15:55.737746 systemd[1]: Started sshd@12-172.232.9.214:22-139.178.89.65:38228.service - OpenSSH per-connection server daemon (139.178.89.65:38228). May 8 00:15:55.864459 update_engine[1459]: I20250508 00:15:55.864365 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:15:55.865026 update_engine[1459]: I20250508 00:15:55.864707 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:15:55.865026 update_engine[1459]: I20250508 00:15:55.864975 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 8 00:15:55.889971 update_engine[1459]: E20250508 00:15:55.889874 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:15:55.890055 update_engine[1459]: I20250508 00:15:55.890017 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 8 00:15:56.076539 sshd[4071]: Accepted publickey for core from 139.178.89.65 port 38228 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:15:56.078094 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:15:56.087389 systemd-logind[1455]: New session 12 of user core. May 8 00:15:56.090752 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:15:56.240505 kubelet[2611]: E0508 00:15:56.240433 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:15:56.390023 sshd[4073]: Connection closed by 139.178.89.65 port 38228 May 8 00:15:56.390966 sshd-session[4071]: pam_unix(sshd:session): session closed for user core May 8 00:15:56.397017 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. May 8 00:15:56.398302 systemd[1]: sshd@12-172.232.9.214:22-139.178.89.65:38228.service: Deactivated successfully. May 8 00:15:56.401473 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:15:56.402523 systemd-logind[1455]: Removed session 12. May 8 00:16:01.179659 kubelet[2611]: E0508 00:16:01.179620 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:01.458165 systemd[1]: Started sshd@13-172.232.9.214:22-139.178.89.65:45646.service - OpenSSH per-connection server daemon (139.178.89.65:45646). May 8 00:16:01.785960 sshd[4085]: Accepted publickey for core from 139.178.89.65 port 45646 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:01.787319 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:01.791219 systemd-logind[1455]: New session 13 of user core. May 8 00:16:01.798722 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:16:02.079973 sshd[4087]: Connection closed by 139.178.89.65 port 45646 May 8 00:16:02.080857 sshd-session[4085]: pam_unix(sshd:session): session closed for user core May 8 00:16:02.084946 systemd[1]: sshd@13-172.232.9.214:22-139.178.89.65:45646.service: Deactivated successfully. May 8 00:16:02.086789 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:16:02.087464 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. May 8 00:16:02.088401 systemd-logind[1455]: Removed session 13. 
May 8 00:16:03.372433 kubelet[2611]: E0508 00:16:03.371839 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:05.163652 kubelet[2611]: E0508 00:16:05.163622 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:05.866734 update_engine[1459]: I20250508 00:16:05.866662 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:16:05.867446 update_engine[1459]: I20250508 00:16:05.866935 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:16:05.867446 update_engine[1459]: I20250508 00:16:05.867350 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:16:05.868072 update_engine[1459]: E20250508 00:16:05.868038 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:16:05.868126 update_engine[1459]: I20250508 00:16:05.868103 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 8 00:16:07.142828 systemd[1]: Started sshd@14-172.232.9.214:22-139.178.89.65:50584.service - OpenSSH per-connection server daemon (139.178.89.65:50584). May 8 00:16:07.473619 sshd[4099]: Accepted publickey for core from 139.178.89.65 port 50584 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:07.474951 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:07.480270 systemd-logind[1455]: New session 14 of user core. May 8 00:16:07.485720 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:16:07.774239 sshd[4101]: Connection closed by 139.178.89.65 port 50584 May 8 00:16:07.775229 sshd-session[4099]: pam_unix(sshd:session): session closed for user core May 8 00:16:07.779048 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. May 8 00:16:07.779997 systemd[1]: sshd@14-172.232.9.214:22-139.178.89.65:50584.service: Deactivated successfully. May 8 00:16:07.782630 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:16:07.784099 systemd-logind[1455]: Removed session 14. May 8 00:16:07.843022 systemd[1]: Started sshd@15-172.232.9.214:22-139.178.89.65:50588.service - OpenSSH per-connection server daemon (139.178.89.65:50588). May 8 00:16:08.173009 sshd[4113]: Accepted publickey for core from 139.178.89.65 port 50588 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:08.174470 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:08.179104 systemd-logind[1455]: New session 15 of user core. May 8 00:16:08.182731 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:16:08.503286 sshd[4115]: Connection closed by 139.178.89.65 port 50588 May 8 00:16:08.503996 sshd-session[4113]: pam_unix(sshd:session): session closed for user core May 8 00:16:08.509089 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. May 8 00:16:08.510163 systemd[1]: sshd@15-172.232.9.214:22-139.178.89.65:50588.service: Deactivated successfully. May 8 00:16:08.512382 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:16:08.513271 systemd-logind[1455]: Removed session 15. May 8 00:16:08.573978 systemd[1]: Started sshd@16-172.232.9.214:22-139.178.89.65:50602.service - OpenSSH per-connection server daemon (139.178.89.65:50602). 
May 8 00:16:08.918222 sshd[4125]: Accepted publickey for core from 139.178.89.65 port 50602 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:08.919753 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:08.924714 systemd-logind[1455]: New session 16 of user core. May 8 00:16:08.928735 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:16:09.721145 sshd[4127]: Connection closed by 139.178.89.65 port 50602 May 8 00:16:09.722585 sshd-session[4125]: pam_unix(sshd:session): session closed for user core May 8 00:16:09.726045 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. May 8 00:16:09.726837 systemd[1]: sshd@16-172.232.9.214:22-139.178.89.65:50602.service: Deactivated successfully. May 8 00:16:09.730314 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:16:09.731178 systemd-logind[1455]: Removed session 16. May 8 00:16:09.784787 systemd[1]: Started sshd@17-172.232.9.214:22-139.178.89.65:50608.service - OpenSSH per-connection server daemon (139.178.89.65:50608). May 8 00:16:10.114135 sshd[4145]: Accepted publickey for core from 139.178.89.65 port 50608 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:10.116717 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:10.123658 systemd-logind[1455]: New session 17 of user core. May 8 00:16:10.132835 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:16:10.550216 sshd[4147]: Connection closed by 139.178.89.65 port 50608 May 8 00:16:10.551196 sshd-session[4145]: pam_unix(sshd:session): session closed for user core May 8 00:16:10.555879 systemd[1]: sshd@17-172.232.9.214:22-139.178.89.65:50608.service: Deactivated successfully. May 8 00:16:10.560069 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:16:10.562155 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. May 8 00:16:10.563686 systemd-logind[1455]: Removed session 17. May 8 00:16:10.618926 systemd[1]: Started sshd@18-172.232.9.214:22-139.178.89.65:50614.service - OpenSSH per-connection server daemon (139.178.89.65:50614). May 8 00:16:10.950547 sshd[4157]: Accepted publickey for core from 139.178.89.65 port 50614 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:10.951222 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:10.956936 systemd-logind[1455]: New session 18 of user core. May 8 00:16:10.965837 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:16:11.251699 sshd[4159]: Connection closed by 139.178.89.65 port 50614 May 8 00:16:11.252530 sshd-session[4157]: pam_unix(sshd:session): session closed for user core May 8 00:16:11.256429 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. May 8 00:16:11.257183 systemd[1]: sshd@18-172.232.9.214:22-139.178.89.65:50614.service: Deactivated successfully. May 8 00:16:11.259884 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:16:11.260764 systemd-logind[1455]: Removed session 18. 
May 8 00:16:15.866890 update_engine[1459]: I20250508 00:16:15.866779 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:16:15.867384 update_engine[1459]: I20250508 00:16:15.867183 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:16:15.867632 update_engine[1459]: I20250508 00:16:15.867566 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:16:15.868817 update_engine[1459]: E20250508 00:16:15.868738 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:16:15.868817 update_engine[1459]: I20250508 00:16:15.868832 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 00:16:15.868817 update_engine[1459]: I20250508 00:16:15.868844 1459 omaha_request_action.cc:617] Omaha request response: May 8 00:16:15.869138 update_engine[1459]: E20250508 00:16:15.868935 1459 omaha_request_action.cc:636] Omaha request network transfer failed. May 8 00:16:15.869138 update_engine[1459]: I20250508 00:16:15.868978 1459 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 8 00:16:15.869138 update_engine[1459]: I20250508 00:16:15.868985 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:16:15.869138 update_engine[1459]: I20250508 00:16:15.868991 1459 update_attempter.cc:306] Processing Done. May 8 00:16:15.869138 update_engine[1459]: E20250508 00:16:15.869007 1459 update_attempter.cc:619] Update failed. May 8 00:16:15.869138 update_engine[1459]: I20250508 00:16:15.869014 1459 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 8 00:16:15.869138 update_engine[1459]: I20250508 00:16:15.869021 1459 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 8 00:16:15.869138 update_engine[1459]: I20250508 00:16:15.869028 1459 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 8 00:16:15.869138 update_engine[1459]: I20250508 00:16:15.869122 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:16:15.869349 update_engine[1459]: I20250508 00:16:15.869158 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:16:15.869349 update_engine[1459]: I20250508 00:16:15.869170 1459 omaha_request_action.cc:272] Request: May 8 00:16:15.869349 update_engine[1459]: May 8 00:16:15.869349 update_engine[1459]: May 8 00:16:15.869349 update_engine[1459]: May 8 00:16:15.869349 update_engine[1459]: May 8 00:16:15.869349 update_engine[1459]: May 8 00:16:15.869349 update_engine[1459]: May 8 00:16:15.869349 update_engine[1459]: I20250508 00:16:15.869182 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:16:15.869551 update_engine[1459]: I20250508 00:16:15.869385 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:16:15.869737 update_engine[1459]: I20250508 00:16:15.869591 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 8 00:16:15.870071 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 8 00:16:15.870840 update_engine[1459]: E20250508 00:16:15.870745 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:16:15.870914 update_engine[1459]: I20250508 00:16:15.870889 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 00:16:15.870914 update_engine[1459]: I20250508 00:16:15.870908 1459 omaha_request_action.cc:617] Omaha request response: May 8 00:16:15.870956 update_engine[1459]: I20250508 00:16:15.870918 1459 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:16:15.870956 update_engine[1459]: I20250508 00:16:15.870927 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:16:15.870956 update_engine[1459]: I20250508 00:16:15.870934 1459 update_attempter.cc:306] Processing Done. May 8 00:16:15.870956 update_engine[1459]: I20250508 00:16:15.870944 1459 update_attempter.cc:310] Error event sent. May 8 00:16:15.871037 update_engine[1459]: I20250508 00:16:15.870980 1459 update_check_scheduler.cc:74] Next update check in 40m22s May 8 00:16:15.871656 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 8 00:16:16.323825 systemd[1]: Started sshd@19-172.232.9.214:22-139.178.89.65:50616.service - OpenSSH per-connection server daemon (139.178.89.65:50616). May 8 00:16:16.657468 sshd[4173]: Accepted publickey for core from 139.178.89.65 port 50616 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:16.659160 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:16.664766 systemd-logind[1455]: New session 19 of user core. May 8 00:16:16.669711 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:16:16.974460 sshd[4175]: Connection closed by 139.178.89.65 port 50616 May 8 00:16:16.975320 sshd-session[4173]: pam_unix(sshd:session): session closed for user core May 8 00:16:16.979665 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. May 8 00:16:16.980416 systemd[1]: sshd@19-172.232.9.214:22-139.178.89.65:50616.service: Deactivated successfully. May 8 00:16:16.982389 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:16:16.983503 systemd-logind[1455]: Removed session 19. May 8 00:16:17.711479 kubelet[2611]: E0508 00:16:17.711200 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:22.036819 systemd[1]: Started sshd@20-172.232.9.214:22-139.178.89.65:34802.service - OpenSSH per-connection server daemon (139.178.89.65:34802). May 8 00:16:22.366587 sshd[4189]: Accepted publickey for core from 139.178.89.65 port 34802 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:22.368580 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:22.372661 systemd-logind[1455]: New session 20 of user core. May 8 00:16:22.382990 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 8 00:16:22.667313 sshd[4191]: Connection closed by 139.178.89.65 port 34802 May 8 00:16:22.668079 sshd-session[4189]: pam_unix(sshd:session): session closed for user core May 8 00:16:22.673171 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. May 8 00:16:22.674503 systemd[1]: sshd@20-172.232.9.214:22-139.178.89.65:34802.service: Deactivated successfully. May 8 00:16:22.678085 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:16:22.680068 systemd-logind[1455]: Removed session 20. May 8 00:16:23.897961 kubelet[2611]: E0508 00:16:23.897921 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:24.719013 kubelet[2611]: E0508 00:16:24.718979 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:26.237857 kubelet[2611]: E0508 00:16:26.237809 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:27.372636 kubelet[2611]: E0508 00:16:27.372532 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:27.732797 systemd[1]: Started sshd@21-172.232.9.214:22-139.178.89.65:58116.service - OpenSSH per-connection server daemon (139.178.89.65:58116). May 8 00:16:28.063256 sshd[4203]: Accepted publickey for core from 139.178.89.65 port 58116 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:28.064039 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:28.069656 systemd-logind[1455]: New session 21 of user core. May 8 00:16:28.079795 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:16:28.366576 sshd[4205]: Connection closed by 139.178.89.65 port 58116 May 8 00:16:28.368202 sshd-session[4203]: pam_unix(sshd:session): session closed for user core May 8 00:16:28.372933 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. May 8 00:16:28.373786 systemd[1]: sshd@21-172.232.9.214:22-139.178.89.65:58116.service: Deactivated successfully. May 8 00:16:28.376303 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:16:28.377826 systemd-logind[1455]: Removed session 21. May 8 00:16:28.436892 systemd[1]: Started sshd@22-172.232.9.214:22-139.178.89.65:58132.service - OpenSSH per-connection server daemon (139.178.89.65:58132). May 8 00:16:28.782296 sshd[4217]: Accepted publickey for core from 139.178.89.65 port 58132 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:28.783459 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:28.788527 systemd-logind[1455]: New session 22 of user core. May 8 00:16:28.793783 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 8 00:16:29.374233 kubelet[2611]: E0508 00:16:29.374181 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:29.375730 kubelet[2611]: E0508 00:16:29.374784 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:30.287672 containerd[1469]: time="2025-05-08T00:16:30.286216378Z" level=info msg="StopContainer for \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\" with timeout 30 (s)" May 8 00:16:30.292665 systemd[1]: run-containerd-runc-k8s.io-0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe-runc.EjgEN0.mount: Deactivated successfully. May 8 00:16:30.294286 containerd[1469]: time="2025-05-08T00:16:30.294246452Z" level=info msg="Stop container \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\" with signal terminated" May 8 00:16:30.324254 containerd[1469]: time="2025-05-08T00:16:30.324181507Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:16:30.332555 containerd[1469]: time="2025-05-08T00:16:30.332520170Z" level=info msg="StopContainer for \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\" with timeout 2 (s)" May 8 00:16:30.334088 containerd[1469]: time="2025-05-08T00:16:30.334060976Z" level=info msg="Stop container \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\" with signal terminated" May 8 00:16:30.340374 systemd[1]: cri-containerd-c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816.scope: Deactivated successfully. May 8 00:16:30.351573 systemd-networkd[1388]: lxc_health: Link DOWN May 8 00:16:30.351582 systemd-networkd[1388]: lxc_health: Lost carrier May 8 00:16:30.375575 systemd[1]: cri-containerd-0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe.scope: Deactivated successfully. May 8 00:16:30.376514 systemd[1]: cri-containerd-0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe.scope: Consumed 7.059s CPU time, 127.4M memory peak, 136K read from disk, 13.3M written to disk. May 8 00:16:30.415809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816-rootfs.mount: Deactivated successfully. May 8 00:16:30.427091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe-rootfs.mount: Deactivated successfully. 
May 8 00:16:30.441955 containerd[1469]: time="2025-05-08T00:16:30.440202047Z" level=info msg="shim disconnected" id=c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816 namespace=k8s.io May 8 00:16:30.441955 containerd[1469]: time="2025-05-08T00:16:30.440270897Z" level=warning msg="cleaning up after shim disconnected" id=c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816 namespace=k8s.io May 8 00:16:30.441955 containerd[1469]: time="2025-05-08T00:16:30.440281027Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:30.442588 containerd[1469]: time="2025-05-08T00:16:30.442521610Z" level=info msg="shim disconnected" id=0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe namespace=k8s.io May 8 00:16:30.442588 containerd[1469]: time="2025-05-08T00:16:30.442587940Z" level=warning msg="cleaning up after shim disconnected" id=0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe namespace=k8s.io May 8 00:16:30.442588 containerd[1469]: time="2025-05-08T00:16:30.442626339Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:30.463144 containerd[1469]: time="2025-05-08T00:16:30.463076724Z" level=info msg="StopContainer for \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\" returns successfully" May 8 00:16:30.463988 containerd[1469]: time="2025-05-08T00:16:30.463770813Z" level=info msg="StopPodSandbox for \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\"" May 8 00:16:30.463988 containerd[1469]: time="2025-05-08T00:16:30.463812852Z" level=info msg="Container to stop \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:16:30.463988 containerd[1469]: time="2025-05-08T00:16:30.463847852Z" level=info msg="Container to stop \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:16:30.463988 containerd[1469]: time="2025-05-08T00:16:30.463857832Z" level=info msg="Container to stop \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:16:30.463988 containerd[1469]: time="2025-05-08T00:16:30.463866962Z" level=info msg="Container to stop \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:16:30.463988 containerd[1469]: time="2025-05-08T00:16:30.463875492Z" level=info msg="Container to stop \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:16:30.466079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407-shm.mount: Deactivated successfully. 
May 8 00:16:30.469116 containerd[1469]: time="2025-05-08T00:16:30.469015905Z" level=info msg="StopContainer for \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\" returns successfully" May 8 00:16:30.469963 containerd[1469]: time="2025-05-08T00:16:30.469895293Z" level=info msg="StopPodSandbox for \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\"" May 8 00:16:30.470023 containerd[1469]: time="2025-05-08T00:16:30.469955583Z" level=info msg="Container to stop \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:16:30.477394 systemd[1]: cri-containerd-1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05.scope: Deactivated successfully. May 8 00:16:30.482062 systemd[1]: cri-containerd-6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407.scope: Deactivated successfully. May 8 00:16:30.494533 kubelet[2611]: E0508 00:16:30.494483 2611 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:16:30.517651 containerd[1469]: time="2025-05-08T00:16:30.517580771Z" level=info msg="shim disconnected" id=1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05 namespace=k8s.io May 8 00:16:30.517651 containerd[1469]: time="2025-05-08T00:16:30.517647051Z" level=warning msg="cleaning up after shim disconnected" id=1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05 namespace=k8s.io May 8 00:16:30.517960 containerd[1469]: time="2025-05-08T00:16:30.517658021Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:30.524459 containerd[1469]: time="2025-05-08T00:16:30.524420149Z" level=info msg="shim disconnected" id=6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407 namespace=k8s.io May 8 00:16:30.524644 containerd[1469]: time="2025-05-08T00:16:30.524459189Z" level=warning msg="cleaning up after shim disconnected" id=6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407 namespace=k8s.io May 8 00:16:30.524644 containerd[1469]: time="2025-05-08T00:16:30.524468289Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:30.539936 containerd[1469]: time="2025-05-08T00:16:30.539335391Z" level=info msg="TearDown network for sandbox \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" successfully" May 8 00:16:30.539936 containerd[1469]: time="2025-05-08T00:16:30.539360882Z" level=info msg="StopPodSandbox for \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" returns successfully" May 8 00:16:30.544615 containerd[1469]: time="2025-05-08T00:16:30.544567555Z" level=info msg="TearDown network for sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" successfully" May 8 00:16:30.544713 containerd[1469]: time="2025-05-08T00:16:30.544693274Z" level=info msg="StopPodSandbox for \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" returns successfully" May 8 00:16:30.627302 kubelet[2611]: I0508 00:16:30.627269 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.627467 kubelet[2611]: I0508 00:16:30.627325 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-lib-modules\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627467 kubelet[2611]: I0508 00:16:30.627349 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-config-path\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627467 kubelet[2611]: I0508 00:16:30.627371 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-xtables-lock\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627467 kubelet[2611]: I0508 00:16:30.627393 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-cilium-config-path\") pod \"eb62ec72-25d7-41e5-9058-ae1e3a53b2e7\" (UID: \"eb62ec72-25d7-41e5-9058-ae1e3a53b2e7\") " May 8 00:16:30.627467 kubelet[2611]: I0508 00:16:30.627408 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-run\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627467 kubelet[2611]: I0508 00:16:30.627424 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-hostproc\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627672 kubelet[2611]: I0508 00:16:30.627442 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sxc6\" (UniqueName: \"kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-kube-api-access-5sxc6\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627672 kubelet[2611]: I0508 00:16:30.627458 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-etc-cni-netd\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627672 kubelet[2611]: I0508 00:16:30.627475 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-kernel\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627672 kubelet[2611]: I0508 00:16:30.627492 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn47x\" (UniqueName: \"kubernetes.io/projected/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-kube-api-access-kn47x\") pod \"eb62ec72-25d7-41e5-9058-ae1e3a53b2e7\" (UID: 
\"eb62ec72-25d7-41e5-9058-ae1e3a53b2e7\") " May 8 00:16:30.627672 kubelet[2611]: I0508 00:16:30.627510 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-hubble-tls\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627672 kubelet[2611]: I0508 00:16:30.627524 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-cgroup\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627878 kubelet[2611]: I0508 00:16:30.627539 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-net\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627878 kubelet[2611]: I0508 00:16:30.627557 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9dc8683-e723-4c18-836e-51cdf78442d6-clustermesh-secrets\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627878 kubelet[2611]: I0508 00:16:30.627572 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-bpf-maps\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627878 kubelet[2611]: I0508 00:16:30.627589 2611 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cni-path\") pod \"e9dc8683-e723-4c18-836e-51cdf78442d6\" (UID: \"e9dc8683-e723-4c18-836e-51cdf78442d6\") " May 8 00:16:30.627878 kubelet[2611]: I0508 00:16:30.627652 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cni-path" (OuterVolumeSpecName: "cni-path") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.630645 kubelet[2611]: I0508 00:16:30.629653 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.630645 kubelet[2611]: I0508 00:16:30.629685 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.630725 kubelet[2611]: I0508 00:16:30.630678 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:16:30.633259 kubelet[2611]: I0508 00:16:30.633231 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-kube-api-access-kn47x" (OuterVolumeSpecName: "kube-api-access-kn47x") pod "eb62ec72-25d7-41e5-9058-ae1e3a53b2e7" (UID: "eb62ec72-25d7-41e5-9058-ae1e3a53b2e7"). InnerVolumeSpecName "kube-api-access-kn47x". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:16:30.633322 kubelet[2611]: I0508 00:16:30.633244 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb62ec72-25d7-41e5-9058-ae1e3a53b2e7" (UID: "eb62ec72-25d7-41e5-9058-ae1e3a53b2e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:16:30.633387 kubelet[2611]: I0508 00:16:30.633374 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.633447 kubelet[2611]: I0508 00:16:30.633435 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-hostproc" (OuterVolumeSpecName: "hostproc") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.635568 kubelet[2611]: I0508 00:16:30.635543 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:16:30.635641 kubelet[2611]: I0508 00:16:30.635576 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.635726 kubelet[2611]: I0508 00:16:30.635698 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.636094 kubelet[2611]: I0508 00:16:30.636075 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-kube-api-access-5sxc6" (OuterVolumeSpecName: "kube-api-access-5sxc6") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "kube-api-access-5sxc6". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:16:30.636169 kubelet[2611]: I0508 00:16:30.636155 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.636241 kubelet[2611]: I0508 00:16:30.636228 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:16:30.637853 kubelet[2611]: I0508 00:16:30.637820 2611 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9dc8683-e723-4c18-836e-51cdf78442d6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e9dc8683-e723-4c18-836e-51cdf78442d6" (UID: "e9dc8683-e723-4c18-836e-51cdf78442d6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:16:30.727843 kubelet[2611]: I0508 00:16:30.727770 2611 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-cilium-config-path\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.727843 kubelet[2611]: I0508 00:16:30.727804 2611 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-run\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.727843 kubelet[2611]: I0508 00:16:30.727815 2611 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-hostproc\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.727843 kubelet[2611]: I0508 00:16:30.727826 2611 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5sxc6\" (UniqueName: \"kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-kube-api-access-5sxc6\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.727843 kubelet[2611]: I0508 00:16:30.727837 2611 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-kernel\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.727843 kubelet[2611]: I0508 00:16:30.727847 2611 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kn47x\" (UniqueName: \"kubernetes.io/projected/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7-kube-api-access-kn47x\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.727843 kubelet[2611]: I0508 00:16:30.727857 2611 
reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9dc8683-e723-4c18-836e-51cdf78442d6-hubble-tls\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727866 2611 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-etc-cni-netd\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727877 2611 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-host-proc-sys-net\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727887 2611 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9dc8683-e723-4c18-836e-51cdf78442d6-clustermesh-secrets\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727896 2611 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-bpf-maps\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727907 2611 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cni-path\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727916 2611 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-cgroup\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727927 2611 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-lib-modules\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728129 kubelet[2611]: I0508 00:16:30.727937 2611 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9dc8683-e723-4c18-836e-51cdf78442d6-cilium-config-path\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.728326 kubelet[2611]: I0508 00:16:30.727946 2611 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9dc8683-e723-4c18-836e-51cdf78442d6-xtables-lock\") on node \"172-232-9-214\" DevicePath \"\"" May 8 00:16:30.731633 kubelet[2611]: I0508 00:16:30.730737 2611 scope.go:117] "RemoveContainer" containerID="c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816" May 8 00:16:30.735350 containerd[1469]: time="2025-05-08T00:16:30.735321527Z" level=info msg="RemoveContainer for \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\"" May 8 00:16:30.736286 systemd[1]: Removed slice kubepods-besteffort-podeb62ec72_25d7_41e5_9058_ae1e3a53b2e7.slice - libcontainer container kubepods-besteffort-podeb62ec72_25d7_41e5_9058_ae1e3a53b2e7.slice. 
May 8 00:16:30.740747 containerd[1469]: time="2025-05-08T00:16:30.740635200Z" level=info msg="RemoveContainer for \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\" returns successfully" May 8 00:16:30.741450 kubelet[2611]: I0508 00:16:30.741428 2611 scope.go:117] "RemoveContainer" containerID="c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816" May 8 00:16:30.742334 containerd[1469]: time="2025-05-08T00:16:30.742302065Z" level=error msg="ContainerStatus for \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\": not found" May 8 00:16:30.742581 kubelet[2611]: E0508 00:16:30.742555 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\": not found" containerID="c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816" May 8 00:16:30.743082 kubelet[2611]: I0508 00:16:30.742588 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816"} err="failed to get container status \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1cf48bc923f8f491dfcb0f84a1894a19618df99920a0cc69a66266864b4a816\": not found" May 8 00:16:30.743121 kubelet[2611]: I0508 00:16:30.743088 2611 scope.go:117] "RemoveContainer" containerID="0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe" May 8 00:16:30.744003 containerd[1469]: time="2025-05-08T00:16:30.743980220Z" level=info msg="RemoveContainer for \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\"" May 8 00:16:30.749162 containerd[1469]: time="2025-05-08T00:16:30.748711734Z" level=info msg="RemoveContainer for \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\" returns successfully" May 8 00:16:30.750076 systemd[1]: Removed slice kubepods-burstable-pode9dc8683_e723_4c18_836e_51cdf78442d6.slice - libcontainer container kubepods-burstable-pode9dc8683_e723_4c18_836e_51cdf78442d6.slice. May 8 00:16:30.750261 systemd[1]: kubepods-burstable-pode9dc8683_e723_4c18_836e_51cdf78442d6.slice: Consumed 7.158s CPU time, 127.8M memory peak, 136K read from disk, 13.3M written to disk. 
May 8 00:16:30.752086 kubelet[2611]: I0508 00:16:30.752067 2611 scope.go:117] "RemoveContainer" containerID="530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007" May 8 00:16:30.753055 containerd[1469]: time="2025-05-08T00:16:30.752941341Z" level=info msg="RemoveContainer for \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\"" May 8 00:16:30.756432 containerd[1469]: time="2025-05-08T00:16:30.756258501Z" level=info msg="RemoveContainer for \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\" returns successfully" May 8 00:16:30.756483 kubelet[2611]: I0508 00:16:30.756369 2611 scope.go:117] "RemoveContainer" containerID="4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6" May 8 00:16:30.757377 containerd[1469]: time="2025-05-08T00:16:30.757352217Z" level=info msg="RemoveContainer for \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\"" May 8 00:16:30.760400 containerd[1469]: time="2025-05-08T00:16:30.759957499Z" level=info msg="RemoveContainer for \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\" returns successfully" May 8 00:16:30.760850 kubelet[2611]: I0508 00:16:30.760783 2611 scope.go:117] "RemoveContainer" containerID="3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc" May 8 00:16:30.762007 containerd[1469]: time="2025-05-08T00:16:30.761987843Z" level=info msg="RemoveContainer for \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\"" May 8 00:16:30.764925 containerd[1469]: time="2025-05-08T00:16:30.764905253Z" level=info msg="RemoveContainer for \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\" returns successfully" May 8 00:16:30.765048 kubelet[2611]: I0508 00:16:30.765022 2611 scope.go:117] "RemoveContainer" containerID="6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5" May 8 00:16:30.765818 containerd[1469]: time="2025-05-08T00:16:30.765794241Z" level=info msg="RemoveContainer for \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\"" May 8 00:16:30.767768 containerd[1469]: time="2025-05-08T00:16:30.767749664Z" level=info msg="RemoveContainer for \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\" returns successfully" May 8 00:16:30.767934 kubelet[2611]: I0508 00:16:30.767879 2611 scope.go:117] "RemoveContainer" containerID="0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe" May 8 00:16:30.768024 containerd[1469]: time="2025-05-08T00:16:30.767997963Z" level=error msg="ContainerStatus for \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\": not found" May 8 00:16:30.768116 kubelet[2611]: E0508 00:16:30.768099 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\": not found" containerID="0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe" May 8 00:16:30.768159 kubelet[2611]: I0508 00:16:30.768122 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe"} err="failed to get container status \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"0ce2fceeae6fb4b6552b3d57a2ad5612d32ac8e451cf60b940421e8ebe068afe\": not found" May 8 00:16:30.768159 kubelet[2611]: I0508 00:16:30.768137 2611 scope.go:117] "RemoveContainer" containerID="530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007" May 8 00:16:30.768320 containerd[1469]: time="2025-05-08T00:16:30.768274342Z" level=error msg="ContainerStatus for \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\": not found" May 8 00:16:30.768402 kubelet[2611]: E0508 00:16:30.768386 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\": not found" containerID="530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007" May 8 00:16:30.768484 kubelet[2611]: I0508 00:16:30.768429 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007"} err="failed to get container status \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\": rpc error: code = NotFound desc = an error occurred when try to find container \"530a36d5e99776fb8f230ba1e980348966109f5e89d5ae613231baa913d55007\": not found" May 8 00:16:30.768484 kubelet[2611]: I0508 00:16:30.768446 2611 scope.go:117] "RemoveContainer" containerID="4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6" May 8 00:16:30.768628 containerd[1469]: time="2025-05-08T00:16:30.768583911Z" level=error msg="ContainerStatus for \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\": not found" May 8 00:16:30.768703 kubelet[2611]: E0508 00:16:30.768680 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\": not found" containerID="4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6" May 8 00:16:30.768729 kubelet[2611]: I0508 00:16:30.768712 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6"} err="failed to get container status \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d6d45b88103d43be129b26431030e118a02a2d605457da9e544cf4ef3e571f6\": not found" May 8 00:16:30.768729 kubelet[2611]: I0508 00:16:30.768726 2611 scope.go:117] "RemoveContainer" containerID="3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc" May 8 00:16:30.768872 containerd[1469]: time="2025-05-08T00:16:30.768852160Z" level=error msg="ContainerStatus for \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\": not found" May 8 00:16:30.768966 kubelet[2611]: E0508 00:16:30.768949 2611 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\": not found" containerID="3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc" May 8 00:16:30.769030 kubelet[2611]: I0508 00:16:30.768968 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc"} err="failed to get container status \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b2a1ba08899abd6db741328f9f4c222dd49751a6bb54dc0638a26e5ffeb46fc\": not found" May 8 00:16:30.769030 kubelet[2611]: I0508 00:16:30.769010 2611 scope.go:117] "RemoveContainer" containerID="6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5" May 8 00:16:30.769196 containerd[1469]: time="2025-05-08T00:16:30.769176049Z" level=error msg="ContainerStatus for \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\": not found" May 8 00:16:30.769281 kubelet[2611]: E0508 00:16:30.769266 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\": not found" containerID="6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5" May 8 00:16:30.769317 kubelet[2611]: I0508 00:16:30.769283 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5"} err="failed to get container status \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6428993056039f5261d2791aa96b822362e0122be303f16f6cfd6c1988fcd3f5\": not found" May 8 00:16:31.178909 kubelet[2611]: E0508 00:16:31.178862 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:31.280692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407-rootfs.mount: Deactivated successfully. May 8 00:16:31.280803 systemd[1]: var-lib-kubelet-pods-e9dc8683\x2de723\x2d4c18\x2d836e\x2d51cdf78442d6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:16:31.280880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05-rootfs.mount: Deactivated successfully. May 8 00:16:31.280964 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05-shm.mount: Deactivated successfully. May 8 00:16:31.281041 systemd[1]: var-lib-kubelet-pods-eb62ec72\x2d25d7\x2d41e5\x2d9058\x2dae1e3a53b2e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkn47x.mount: Deactivated successfully. May 8 00:16:31.281122 systemd[1]: var-lib-kubelet-pods-e9dc8683\x2de723\x2d4c18\x2d836e\x2d51cdf78442d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5sxc6.mount: Deactivated successfully. 
May 8 00:16:31.281197 systemd[1]: var-lib-kubelet-pods-e9dc8683\x2de723\x2d4c18\x2d836e\x2d51cdf78442d6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:16:31.374472 kubelet[2611]: I0508 00:16:31.374430 2611 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9dc8683-e723-4c18-836e-51cdf78442d6" path="/var/lib/kubelet/pods/e9dc8683-e723-4c18-836e-51cdf78442d6/volumes" May 8 00:16:31.375347 kubelet[2611]: I0508 00:16:31.375324 2611 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb62ec72-25d7-41e5-9058-ae1e3a53b2e7" path="/var/lib/kubelet/pods/eb62ec72-25d7-41e5-9058-ae1e3a53b2e7/volumes" May 8 00:16:32.266391 sshd[4219]: Connection closed by 139.178.89.65 port 58132 May 8 00:16:32.267144 sshd-session[4217]: pam_unix(sshd:session): session closed for user core May 8 00:16:32.271655 systemd[1]: sshd@22-172.232.9.214:22-139.178.89.65:58132.service: Deactivated successfully. May 8 00:16:32.273997 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:16:32.274940 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. May 8 00:16:32.275945 systemd-logind[1455]: Removed session 22. May 8 00:16:32.338902 systemd[1]: Started sshd@23-172.232.9.214:22-139.178.89.65:58140.service - OpenSSH per-connection server daemon (139.178.89.65:58140). May 8 00:16:32.372107 kubelet[2611]: E0508 00:16:32.372067 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:32.676116 sshd[4377]: Accepted publickey for core from 139.178.89.65 port 58140 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:32.677496 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:32.682632 systemd-logind[1455]: New session 23 of user core. May 8 00:16:32.689789 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:16:33.357510 kubelet[2611]: I0508 00:16:33.356468 2611 memory_manager.go:355] "RemoveStaleState removing state" podUID="eb62ec72-25d7-41e5-9058-ae1e3a53b2e7" containerName="cilium-operator" May 8 00:16:33.357510 kubelet[2611]: I0508 00:16:33.356494 2611 memory_manager.go:355] "RemoveStaleState removing state" podUID="e9dc8683-e723-4c18-836e-51cdf78442d6" containerName="cilium-agent" May 8 00:16:33.364406 systemd[1]: Created slice kubepods-burstable-pod4e118814_44a8_483d_b156_24c97f6453b4.slice - libcontainer container kubepods-burstable-pod4e118814_44a8_483d_b156_24c97f6453b4.slice. 
May 8 00:16:33.369797 kubelet[2611]: W0508 00:16:33.369778 2611 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-232-9-214" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-232-9-214' and this object May 8 00:16:33.369917 kubelet[2611]: E0508 00:16:33.369889 2611 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-232-9-214\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-214' and this object" logger="UnhandledError" May 8 00:16:33.370033 kubelet[2611]: W0508 00:16:33.370020 2611 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172-232-9-214" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-9-214' and this object May 8 00:16:33.370096 kubelet[2611]: E0508 00:16:33.370083 2611 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-232-9-214\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-214' and this object" logger="UnhandledError" May 8 00:16:33.370168 kubelet[2611]: W0508 00:16:33.370158 2611 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172-232-9-214" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-9-214' and this object May 8 00:16:33.370242 kubelet[2611]: E0508 00:16:33.370219 2611 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:172-232-9-214\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-214' and this object" logger="UnhandledError" May 8 00:16:33.370305 kubelet[2611]: W0508 00:16:33.370294 2611 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-232-9-214" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-9-214' and this object May 8 00:16:33.370392 kubelet[2611]: E0508 00:16:33.370378 2611 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-232-9-214\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-214' and this object" logger="UnhandledError" May 8 00:16:33.370434 kubelet[2611]: I0508 00:16:33.370350 2611 status_manager.go:890] "Failed to get status for pod" podUID="4e118814-44a8-483d-b156-24c97f6453b4" pod="kube-system/cilium-bkbz6" 
err="pods \"cilium-bkbz6\" is forbidden: User \"system:node:172-232-9-214\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-232-9-214' and this object" May 8 00:16:33.387684 sshd[4379]: Connection closed by 139.178.89.65 port 58140 May 8 00:16:33.389456 sshd-session[4377]: pam_unix(sshd:session): session closed for user core May 8 00:16:33.395965 systemd[1]: sshd@23-172.232.9.214:22-139.178.89.65:58140.service: Deactivated successfully. May 8 00:16:33.405144 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:16:33.406165 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. May 8 00:16:33.407967 systemd-logind[1455]: Removed session 23. May 8 00:16:33.446808 kubelet[2611]: I0508 00:16:33.446513 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-cni-path\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.446808 kubelet[2611]: I0508 00:16:33.446546 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-xtables-lock\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.446808 kubelet[2611]: I0508 00:16:33.446561 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4e118814-44a8-483d-b156-24c97f6453b4-cilium-ipsec-secrets\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.446808 kubelet[2611]: I0508 00:16:33.446577 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-cilium-run\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.446808 kubelet[2611]: I0508 00:16:33.446590 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-host-proc-sys-kernel\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.446808 kubelet[2611]: I0508 00:16:33.446628 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-cilium-cgroup\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450293 kubelet[2611]: I0508 00:16:33.446639 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-host-proc-sys-net\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450293 kubelet[2611]: I0508 00:16:33.446654 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-bpf-maps\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450293 kubelet[2611]: I0508 00:16:33.446667 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-etc-cni-netd\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450293 kubelet[2611]: I0508 00:16:33.446678 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e118814-44a8-483d-b156-24c97f6453b4-hubble-tls\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450293 kubelet[2611]: I0508 00:16:33.446691 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e118814-44a8-483d-b156-24c97f6453b4-clustermesh-secrets\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450293 kubelet[2611]: I0508 00:16:33.446702 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s49g\" (UniqueName: \"kubernetes.io/projected/4e118814-44a8-483d-b156-24c97f6453b4-kube-api-access-2s49g\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.449870 systemd[1]: Started sshd@24-172.232.9.214:22-139.178.89.65:58150.service - OpenSSH per-connection server daemon (139.178.89.65:58150). May 8 00:16:33.450474 kubelet[2611]: I0508 00:16:33.446714 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-hostproc\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450474 kubelet[2611]: I0508 00:16:33.446725 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e118814-44a8-483d-b156-24c97f6453b4-lib-modules\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.450474 kubelet[2611]: I0508 00:16:33.446740 2611 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e118814-44a8-483d-b156-24c97f6453b4-cilium-config-path\") pod \"cilium-bkbz6\" (UID: \"4e118814-44a8-483d-b156-24c97f6453b4\") " pod="kube-system/cilium-bkbz6" May 8 00:16:33.775756 sshd[4390]: Accepted publickey for core from 139.178.89.65 port 58150 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:33.777869 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:33.782971 systemd-logind[1455]: New session 24 of user core. May 8 00:16:33.791730 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 8 00:16:34.013194 sshd[4393]: Connection closed by 139.178.89.65 port 58150 May 8 00:16:34.014839 sshd-session[4390]: pam_unix(sshd:session): session closed for user core May 8 00:16:34.021146 systemd[1]: sshd@24-172.232.9.214:22-139.178.89.65:58150.service: Deactivated successfully. May 8 00:16:34.023331 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:16:34.024163 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. May 8 00:16:34.025135 systemd-logind[1455]: Removed session 24. May 8 00:16:34.085317 systemd[1]: Started sshd@25-172.232.9.214:22-139.178.89.65:58158.service - OpenSSH per-connection server daemon (139.178.89.65:58158). May 8 00:16:34.427186 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 58158 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:16:34.428937 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:16:34.433838 systemd-logind[1455]: New session 25 of user core. May 8 00:16:34.438731 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:16:34.548844 kubelet[2611]: E0508 00:16:34.548747 2611 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 8 00:16:34.548844 kubelet[2611]: E0508 00:16:34.548811 2611 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-bkbz6: failed to sync secret cache: timed out waiting for the condition May 8 00:16:34.549745 kubelet[2611]: E0508 00:16:34.548901 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e118814-44a8-483d-b156-24c97f6453b4-hubble-tls podName:4e118814-44a8-483d-b156-24c97f6453b4 nodeName:}" failed. No retries permitted until 2025-05-08 00:16:35.048872311 +0000 UTC m=+169.776646148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/4e118814-44a8-483d-b156-24c97f6453b4-hubble-tls") pod "cilium-bkbz6" (UID: "4e118814-44a8-483d-b156-24c97f6453b4") : failed to sync secret cache: timed out waiting for the condition May 8 00:16:34.549745 kubelet[2611]: E0508 00:16:34.548764 2611 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 8 00:16:34.549745 kubelet[2611]: E0508 00:16:34.549293 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4e118814-44a8-483d-b156-24c97f6453b4-cilium-config-path podName:4e118814-44a8-483d-b156-24c97f6453b4 nodeName:}" failed. No retries permitted until 2025-05-08 00:16:35.049280919 +0000 UTC m=+169.777054756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4e118814-44a8-483d-b156-24c97f6453b4-cilium-config-path") pod "cilium-bkbz6" (UID: "4e118814-44a8-483d-b156-24c97f6453b4") : failed to sync configmap cache: timed out waiting for the condition May 8 00:16:34.549745 kubelet[2611]: E0508 00:16:34.548785 2611 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition May 8 00:16:34.550102 kubelet[2611]: E0508 00:16:34.549523 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e118814-44a8-483d-b156-24c97f6453b4-cilium-ipsec-secrets podName:4e118814-44a8-483d-b156-24c97f6453b4 nodeName:}" failed. 
No retries permitted until 2025-05-08 00:16:35.049515888 +0000 UTC m=+169.777289725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/4e118814-44a8-483d-b156-24c97f6453b4-cilium-ipsec-secrets") pod "cilium-bkbz6" (UID: "4e118814-44a8-483d-b156-24c97f6453b4") : failed to sync secret cache: timed out waiting for the condition May 8 00:16:35.164328 kubelet[2611]: E0508 00:16:35.164279 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]" May 8 00:16:35.167717 kubelet[2611]: E0508 00:16:35.167678 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:35.168951 containerd[1469]: time="2025-05-08T00:16:35.168578009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkbz6,Uid:4e118814-44a8-483d-b156-24c97f6453b4,Namespace:kube-system,Attempt:0,}" May 8 00:16:35.192934 containerd[1469]: time="2025-05-08T00:16:35.192846749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:16:35.193769 containerd[1469]: time="2025-05-08T00:16:35.193481907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:16:35.194112 containerd[1469]: time="2025-05-08T00:16:35.194022415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:16:35.194554 containerd[1469]: time="2025-05-08T00:16:35.194297794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:16:35.219827 systemd[1]: Started cri-containerd-efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951.scope - libcontainer container efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951. 
May 8 00:16:35.249751 containerd[1469]: time="2025-05-08T00:16:35.249663164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkbz6,Uid:4e118814-44a8-483d-b156-24c97f6453b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\"" May 8 00:16:35.250397 kubelet[2611]: E0508 00:16:35.250367 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:35.253277 containerd[1469]: time="2025-05-08T00:16:35.253235223Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:16:35.265442 containerd[1469]: time="2025-05-08T00:16:35.265347328Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f\"" May 8 00:16:35.265817 containerd[1469]: time="2025-05-08T00:16:35.265786467Z" level=info msg="StartContainer for \"ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f\"" May 8 00:16:35.298737 systemd[1]: Started cri-containerd-ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f.scope - libcontainer container ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f. May 8 00:16:35.329014 containerd[1469]: time="2025-05-08T00:16:35.328971804Z" level=info msg="StartContainer for \"ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f\" returns successfully" May 8 00:16:35.340209 systemd[1]: cri-containerd-ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f.scope: Deactivated successfully. 
May 8 00:16:35.374778 containerd[1469]: time="2025-05-08T00:16:35.374421291Z" level=info msg="shim disconnected" id=ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f namespace=k8s.io May 8 00:16:35.374778 containerd[1469]: time="2025-05-08T00:16:35.374463141Z" level=warning msg="cleaning up after shim disconnected" id=ed5d697c95595aa1f3cc202549fc2561746b284dbd9980d7b51e1a41a44ef10f namespace=k8s.io May 8 00:16:35.374778 containerd[1469]: time="2025-05-08T00:16:35.374471491Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:35.495719 kubelet[2611]: E0508 00:16:35.495536 2611 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:16:35.750028 kubelet[2611]: E0508 00:16:35.749940 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:35.753211 containerd[1469]: time="2025-05-08T00:16:35.753177554Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:16:35.763013 containerd[1469]: time="2025-05-08T00:16:35.762968705Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218\"" May 8 00:16:35.764787 containerd[1469]: time="2025-05-08T00:16:35.763486454Z" level=info msg="StartContainer for \"a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218\"" May 8 00:16:35.797734 systemd[1]: Started cri-containerd-a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218.scope - libcontainer container a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218. May 8 00:16:35.837459 containerd[1469]: time="2025-05-08T00:16:35.837369269Z" level=info msg="StartContainer for \"a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218\" returns successfully" May 8 00:16:35.852131 systemd[1]: cri-containerd-a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218.scope: Deactivated successfully. 
May 8 00:16:35.881238 containerd[1469]: time="2025-05-08T00:16:35.881180292Z" level=info msg="shim disconnected" id=a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218 namespace=k8s.io May 8 00:16:35.881238 containerd[1469]: time="2025-05-08T00:16:35.881234532Z" level=warning msg="cleaning up after shim disconnected" id=a4337d16dd95a6a6bf6628621120cc7409a758cb21b802fdb988d77645b98218 namespace=k8s.io May 8 00:16:35.881238 containerd[1469]: time="2025-05-08T00:16:35.881243442Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:36.371616 kubelet[2611]: E0508 00:16:36.371560 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:36.753790 kubelet[2611]: E0508 00:16:36.753686 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:36.756359 containerd[1469]: time="2025-05-08T00:16:36.755659735Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:16:36.773947 containerd[1469]: time="2025-05-08T00:16:36.771881489Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0\"" May 8 00:16:36.773947 containerd[1469]: time="2025-05-08T00:16:36.773804243Z" level=info msg="StartContainer for \"f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0\"" May 8 00:16:36.774531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852742588.mount: Deactivated successfully. May 8 00:16:36.814740 systemd[1]: Started cri-containerd-f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0.scope - libcontainer container f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0. May 8 00:16:36.851527 containerd[1469]: time="2025-05-08T00:16:36.851493561Z" level=info msg="StartContainer for \"f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0\" returns successfully" May 8 00:16:36.853680 systemd[1]: cri-containerd-f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0.scope: Deactivated successfully. May 8 00:16:36.887169 containerd[1469]: time="2025-05-08T00:16:36.887086420Z" level=info msg="shim disconnected" id=f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0 namespace=k8s.io May 8 00:16:36.887169 containerd[1469]: time="2025-05-08T00:16:36.887147580Z" level=warning msg="cleaning up after shim disconnected" id=f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0 namespace=k8s.io May 8 00:16:36.887169 containerd[1469]: time="2025-05-08T00:16:36.887167140Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:37.067539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f838e5995195c1e1e7db75e93916bf0b46c692b5765bd49a4b2a75b760b3aef0-rootfs.mount: Deactivated successfully. 
May 8 00:16:37.755761 kubelet[2611]: E0508 00:16:37.755733 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:37.757794 containerd[1469]: time="2025-05-08T00:16:37.757748998Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:16:37.773217 containerd[1469]: time="2025-05-08T00:16:37.773170545Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5\"" May 8 00:16:37.774378 containerd[1469]: time="2025-05-08T00:16:37.773925392Z" level=info msg="StartContainer for \"5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5\"" May 8 00:16:37.806272 systemd[1]: run-containerd-runc-k8s.io-5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5-runc.Kv15UO.mount: Deactivated successfully. May 8 00:16:37.817724 systemd[1]: Started cri-containerd-5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5.scope - libcontainer container 5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5. May 8 00:16:37.842257 systemd[1]: cri-containerd-5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5.scope: Deactivated successfully. May 8 00:16:37.843305 containerd[1469]: time="2025-05-08T00:16:37.843251899Z" level=info msg="StartContainer for \"5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5\" returns successfully" May 8 00:16:37.863417 containerd[1469]: time="2025-05-08T00:16:37.863367272Z" level=info msg="shim disconnected" id=5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5 namespace=k8s.io May 8 00:16:37.863417 containerd[1469]: time="2025-05-08T00:16:37.863411332Z" level=warning msg="cleaning up after shim disconnected" id=5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5 namespace=k8s.io May 8 00:16:37.863417 containerd[1469]: time="2025-05-08T00:16:37.863419932Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:16:38.064586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5380b345298a8b6d33832ab69d5bd5cb998e2edac1aa29c96ec4353e049615e5-rootfs.mount: Deactivated successfully. 
May 8 00:16:38.760184 kubelet[2611]: E0508 00:16:38.760133 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:38.763251 containerd[1469]: time="2025-05-08T00:16:38.763206961Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:16:38.791135 containerd[1469]: time="2025-05-08T00:16:38.791063405Z" level=info msg="CreateContainer within sandbox \"efc943759dadd49d126e74a16fdf3cfa4e34a3589c78fd1fbe69d07b2f9be951\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb\"" May 8 00:16:38.793342 containerd[1469]: time="2025-05-08T00:16:38.793296978Z" level=info msg="StartContainer for \"43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb\"" May 8 00:16:38.821727 systemd[1]: Started cri-containerd-43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb.scope - libcontainer container 43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb. May 8 00:16:38.862296 containerd[1469]: time="2025-05-08T00:16:38.862232268Z" level=info msg="StartContainer for \"43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb\" returns successfully" May 8 00:16:39.204414 kubelet[2611]: I0508 00:16:39.204277 2611 setters.go:602] "Node became not ready" node="172-232-9-214" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:16:39Z","lastTransitionTime":"2025-05-08T00:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:16:39.333627 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 8 00:16:39.764180 kubelet[2611]: E0508 00:16:39.764140 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:39.781399 kubelet[2611]: I0508 00:16:39.780436 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bkbz6" podStartSLOduration=6.780411991 podStartE2EDuration="6.780411991s" podCreationTimestamp="2025-05-08 00:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:16:39.779244863 +0000 UTC m=+174.507018710" watchObservedRunningTime="2025-05-08 00:16:39.780411991 +0000 UTC m=+174.508185828" May 8 00:16:41.169296 kubelet[2611]: E0508 00:16:41.169244 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:42.227517 systemd-networkd[1388]: lxc_health: Link UP May 8 00:16:42.234825 systemd-networkd[1388]: lxc_health: Gained carrier May 8 00:16:43.068007 systemd[1]: run-containerd-runc-k8s.io-43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb-runc.bxGz8Y.mount: Deactivated successfully. 
May 8 00:16:43.170376 kubelet[2611]: E0508 00:16:43.170286 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:43.774628 kubelet[2611]: E0508 00:16:43.773089 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:44.281798 systemd-networkd[1388]: lxc_health: Gained IPv6LL May 8 00:16:44.774649 kubelet[2611]: E0508 00:16:44.774370 2611 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" May 8 00:16:45.214319 systemd[1]: run-containerd-runc-k8s.io-43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb-runc.icxYYV.mount: Deactivated successfully. May 8 00:16:45.363698 containerd[1469]: time="2025-05-08T00:16:45.363650981Z" level=info msg="StopPodSandbox for \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\"" May 8 00:16:45.365624 containerd[1469]: time="2025-05-08T00:16:45.364183039Z" level=info msg="TearDown network for sandbox \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" successfully" May 8 00:16:45.365624 containerd[1469]: time="2025-05-08T00:16:45.364200649Z" level=info msg="StopPodSandbox for \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" returns successfully" May 8 00:16:45.366204 containerd[1469]: time="2025-05-08T00:16:45.366129385Z" level=info msg="RemovePodSandbox for \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\"" May 8 00:16:45.366204 containerd[1469]: time="2025-05-08T00:16:45.366151214Z" level=info msg="Forcibly stopping sandbox \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\"" May 8 00:16:45.366352 containerd[1469]: time="2025-05-08T00:16:45.366335814Z" level=info msg="TearDown network for sandbox \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" successfully" May 8 00:16:45.370728 containerd[1469]: time="2025-05-08T00:16:45.370697783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:16:45.370926 containerd[1469]: time="2025-05-08T00:16:45.370909922Z" level=info msg="RemovePodSandbox \"1cb06d3f3d7705971669b3f8385b2f35d4e5c3fc1d8da560d26e4bdd67c92b05\" returns successfully" May 8 00:16:45.373336 containerd[1469]: time="2025-05-08T00:16:45.373198647Z" level=info msg="StopPodSandbox for \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\"" May 8 00:16:45.374082 containerd[1469]: time="2025-05-08T00:16:45.374064705Z" level=info msg="TearDown network for sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" successfully" May 8 00:16:45.374170 containerd[1469]: time="2025-05-08T00:16:45.374156605Z" level=info msg="StopPodSandbox for \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" returns successfully" May 8 00:16:45.375159 containerd[1469]: time="2025-05-08T00:16:45.375139292Z" level=info msg="RemovePodSandbox for \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\"" May 8 00:16:45.375396 containerd[1469]: time="2025-05-08T00:16:45.375381641Z" level=info msg="Forcibly stopping sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\"" May 8 00:16:45.376803 containerd[1469]: time="2025-05-08T00:16:45.375678281Z" level=info msg="TearDown network for sandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" successfully" May 8 00:16:45.380231 containerd[1469]: time="2025-05-08T00:16:45.380197940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:16:45.380335 containerd[1469]: time="2025-05-08T00:16:45.380320310Z" level=info msg="RemovePodSandbox \"6fa49a718f512852bf8be468d5a9ee4fcf0419c8c1161d5327456a58faf40407\" returns successfully" May 8 00:16:47.324502 systemd[1]: run-containerd-runc-k8s.io-43b4578819d4fbc3275a33f3125187aa57be082135155a05be53494803da5efb-runc.u1Mezs.mount: Deactivated successfully. May 8 00:16:47.430061 sshd[4403]: Connection closed by 139.178.89.65 port 58158 May 8 00:16:47.430789 sshd-session[4400]: pam_unix(sshd:session): session closed for user core May 8 00:16:47.434966 systemd[1]: sshd@25-172.232.9.214:22-139.178.89.65:58158.service: Deactivated successfully. May 8 00:16:47.437316 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:16:47.438014 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. May 8 00:16:47.439387 systemd-logind[1455]: Removed session 25. May 8 00:16:47.710982 kubelet[2611]: E0508 00:16:47.710923 2611 server.go:321] "Unable to authenticate the request due to an error" err="[invalid bearer token, invalid signature, no keys found]"