May 8 00:19:57.911222 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:19:27 -00 2025
May 8 00:19:57.911245 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:19:57.911254 kernel: BIOS-provided physical RAM map:
May 8 00:19:57.911260 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 8 00:19:57.911265 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 8 00:19:57.911274 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 00:19:57.911281 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 8 00:19:57.911287 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 8 00:19:57.911292 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:19:57.911298 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 8 00:19:57.911304 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:19:57.911309 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 00:19:57.911315 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 8 00:19:57.911321 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:19:57.911330 kernel: NX (Execute Disable) protection: active
May 8 00:19:57.911336 kernel: APIC: Static calls initialized
May 8 00:19:57.911342 kernel: SMBIOS 2.8 present.
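The e820 map above is easy to sanity-check offline. A minimal sketch (not part of the original log; assumed to be run against saved `dmesg` text like the lines above) that parses the BIOS-e820 entries and totals the usable RAM:

```python
import re

# Matches kernel lines like:
#   BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg_text: str) -> int:
    """Sum the inclusive 'usable' ranges of a BIOS-provided e820 map."""
    total = 0
    for start, end, kind in E820_RE.findall(dmesg_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total

# For the map logged above this yields 4294494208 bytes (~4095 MiB),
# close to the "Memory: 3964152K/4193772K available" line further down
# once the kernel's own early reservations are subtracted.
```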
May 8 00:19:57.911348 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 8 00:19:57.911354 kernel: Hypervisor detected: KVM
May 8 00:19:57.911363 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:19:57.911369 kernel: kvm-clock: using sched offset of 4662688272 cycles
May 8 00:19:57.911375 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:19:57.911381 kernel: tsc: Detected 1999.996 MHz processor
May 8 00:19:57.911388 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:19:57.911394 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:19:57.911401 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 8 00:19:57.911407 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 8 00:19:57.911413 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:19:57.911422 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 8 00:19:57.911428 kernel: Using GB pages for direct mapping
May 8 00:19:57.911434 kernel: ACPI: Early table checksum verification disabled
May 8 00:19:57.911441 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 8 00:19:57.911447 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:57.911454 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:57.911460 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:57.911466 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 8 00:19:57.911472 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:57.911481 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:57.911487 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:57.911494 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:19:57.911504 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 8 00:19:57.911510 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 8 00:19:57.911517 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 8 00:19:57.911523 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 8 00:19:57.911532 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 8 00:19:57.911538 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 8 00:19:57.911545 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 8 00:19:57.911551 kernel: No NUMA configuration found
May 8 00:19:57.911558 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 8 00:19:57.911564 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
May 8 00:19:57.911571 kernel: Zone ranges:
May 8 00:19:57.911577 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:19:57.911586 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 8 00:19:57.911593 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 8 00:19:57.911599 kernel: Movable zone start for each node
May 8 00:19:57.911606 kernel: Early memory node ranges
May 8 00:19:57.911612 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 00:19:57.911618 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 8 00:19:57.911625 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 8 00:19:57.911631 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 8 00:19:57.911637 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:19:57.911647 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 00:19:57.911653 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 8 00:19:57.911660 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:19:57.911666 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:19:57.911672 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:19:57.911679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:19:57.911685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:19:57.911692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:19:57.911698 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:19:57.911707 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:19:57.911714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:19:57.911720 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:19:57.911726 kernel: TSC deadline timer available
May 8 00:19:57.911733 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 8 00:19:57.911739 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:19:57.911745 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:19:57.911752 kernel: kvm-guest: setup PV sched yield
May 8 00:19:57.911758 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 8 00:19:57.911767 kernel: Booting paravirtualized kernel on KVM
May 8 00:19:57.911774 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:19:57.911780 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 8 00:19:57.911786 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
May 8 00:19:57.911793 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
May 8 00:19:57.911799 kernel: pcpu-alloc: [0] 0 1
May 8 00:19:57.911805 kernel: kvm-guest: PV spinlocks enabled
May 8 00:19:57.911812 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:19:57.911819 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:19:57.911829 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:19:57.911835 kernel: random: crng init done
May 8 00:19:57.911841 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:19:57.911848 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:19:57.911854 kernel: Fallback order for Node 0: 0
May 8 00:19:57.911860 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 8 00:19:57.911867 kernel: Policy zone: Normal
May 8 00:19:57.911873 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:19:57.911882 kernel: software IO TLB: area num 2.
May 8 00:19:57.911889 kernel: Memory: 3964152K/4193772K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 229360K reserved, 0K cma-reserved)
May 8 00:19:57.911895 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 00:19:57.911901 kernel: ftrace: allocating 37918 entries in 149 pages
May 8 00:19:57.911908 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:19:57.911914 kernel: Dynamic Preempt: voluntary
May 8 00:19:57.911921 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:19:57.911928 kernel: rcu: RCU event tracing is enabled.
May 8 00:19:57.911935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 00:19:57.911944 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:19:57.911950 kernel: Rude variant of Tasks RCU enabled.
May 8 00:19:57.911957 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:19:57.911963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:19:57.911970 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 00:19:57.911976 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 8 00:19:57.911983 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:19:57.911989 kernel: Console: colour VGA+ 80x25
May 8 00:19:57.911995 kernel: printk: console [tty0] enabled
May 8 00:19:57.913037 kernel: printk: console [ttyS0] enabled
May 8 00:19:57.913048 kernel: ACPI: Core revision 20230628
May 8 00:19:57.913055 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:19:57.913062 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:19:57.913079 kernel: x2apic enabled
May 8 00:19:57.913088 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:19:57.913095 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 8 00:19:57.913102 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 8 00:19:57.913109 kernel: kvm-guest: setup PV IPIs
May 8 00:19:57.913115 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:19:57.913122 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:19:57.913129 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999996)
May 8 00:19:57.913138 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:19:57.913145 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:19:57.913152 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:19:57.913158 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:19:57.913165 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:19:57.913174 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:19:57.913181 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:19:57.913188 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 8 00:19:57.913195 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:19:57.913201 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:19:57.913208 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:19:57.913216 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:19:57.913223 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:19:57.913232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:19:57.913239 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:19:57.913246 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:19:57.913253 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 8 00:19:57.913259 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:19:57.913266 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 8 00:19:57.913273 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 8 00:19:57.913279 kernel: Freeing SMP alternatives memory: 32K
May 8 00:19:57.913286 kernel: pid_max: default: 32768 minimum: 301
May 8 00:19:57.913295 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:19:57.913302 kernel: landlock: Up and running.
May 8 00:19:57.913309 kernel: SELinux: Initializing.
May 8 00:19:57.913315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:19:57.913322 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:19:57.913329 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 8 00:19:57.913336 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:19:57.913343 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:19:57.913349 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 00:19:57.913359 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:19:57.913366 kernel: ... version: 0
May 8 00:19:57.913372 kernel: ... bit width: 48
May 8 00:19:57.913379 kernel: ... generic registers: 6
May 8 00:19:57.913386 kernel: ... value mask: 0000ffffffffffff
May 8 00:19:57.913392 kernel: ... max period: 00007fffffffffff
May 8 00:19:57.913399 kernel: ... fixed-purpose events: 0
May 8 00:19:57.913406 kernel: ... event mask: 000000000000003f
May 8 00:19:57.913412 kernel: signal: max sigframe size: 3376
May 8 00:19:57.913422 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:19:57.913429 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:19:57.913436 kernel: smp: Bringing up secondary CPUs ...
May 8 00:19:57.913443 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:19:57.913449 kernel: .... node #0, CPUs: #1
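The four XSAVE features logged earlier in this stretch correspond exactly to the bits of the 0x207 mask in the "Enabled xstate features" line (0x001, 0x002, 0x004, 0x200 are bits 0, 1, 2, 9). A quick sketch of the decoding, with the feature names taken from the log itself:

```python
# Bit positions in the kernel's xstate feature mask, as named above.
XSTATE_NAMES = {
    0: "x87 floating point registers",
    1: "SSE registers",
    2: "AVX registers",
    9: "Protection Keys User registers",
}

def decode_xstate(mask: int) -> list[str]:
    """Expand an xstate feature mask into the names of its set bits."""
    return [name for bit, name in XSTATE_NAMES.items() if mask & (1 << bit)]

assert decode_xstate(0x207) == [
    "x87 floating point registers",
    "SSE registers",
    "AVX registers",
    "Protection Keys User registers",
]
```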
May 8 00:19:57.913456 kernel: smp: Brought up 1 node, 2 CPUs
May 8 00:19:57.913463 kernel: smpboot: Max logical packages: 1
May 8 00:19:57.913469 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
May 8 00:19:57.913476 kernel: devtmpfs: initialized
May 8 00:19:57.913486 kernel: x86/mm: Memory block size: 128MB
May 8 00:19:57.913493 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:19:57.913500 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 8 00:19:57.913506 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:19:57.913513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:19:57.913520 kernel: audit: initializing netlink subsys (disabled)
May 8 00:19:57.913526 kernel: audit: type=2000 audit(1746663597.157:1): state=initialized audit_enabled=0 res=1
May 8 00:19:57.913533 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:19:57.913540 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:19:57.913549 kernel: cpuidle: using governor menu
May 8 00:19:57.913556 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:19:57.913563 kernel: dca service started, version 1.12.1
May 8 00:19:57.913570 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:19:57.913577 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 8 00:19:57.913583 kernel: PCI: Using configuration type 1 for base access
May 8 00:19:57.913590 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:19:57.913597 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:19:57.913603 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:19:57.913613 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:19:57.913620 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:19:57.913626 kernel: ACPI: Added _OSI(Module Device)
May 8 00:19:57.913633 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:19:57.913640 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:19:57.913646 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:19:57.913653 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:19:57.913660 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:19:57.913667 kernel: ACPI: Interpreter enabled
May 8 00:19:57.913676 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:19:57.913683 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:19:57.913690 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:19:57.913696 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:19:57.913703 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:19:57.913710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:19:57.913882 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:19:57.914047 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:19:57.914240 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:19:57.914253 kernel: PCI host bridge to bus 0000:00
May 8 00:19:57.914376 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:19:57.914484 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:19:57.914604 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:19:57.914712 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 8 00:19:57.914819 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:19:57.914929 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 8 00:19:57.918796 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:19:57.918940 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:19:57.919102 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:19:57.919221 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 8 00:19:57.919335 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 8 00:19:57.919453 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 8 00:19:57.919564 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:19:57.919687 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
May 8 00:19:57.919800 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
May 8 00:19:57.919915 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 8 00:19:57.920122 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 8 00:19:57.920257 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 8 00:19:57.920382 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
May 8 00:19:57.920499 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 8 00:19:57.920612 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 8 00:19:57.920725 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 8 00:19:57.920851 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:19:57.920965 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:19:57.921131 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:19:57.921256 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
May 8 00:19:57.921370 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
May 8 00:19:57.921493 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:19:57.921607 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 8 00:19:57.921617 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:19:57.921624 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:19:57.921631 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:19:57.921642 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:19:57.921649 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:19:57.921655 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:19:57.921662 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:19:57.921669 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:19:57.921676 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:19:57.921683 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:19:57.921689 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:19:57.921696 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:19:57.921705 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:19:57.921712 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
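The BAR assignments printed above for devices like 0000:00:03.0 (1af4:1000, a virtio network device) can be re-read from a running system via sysfs. A small sketch (not from the log) using the standard per-device `resource` file, whose lines are `start end flags` triples in hex; reg 0x10 in the log is BAR 0, reg 0x20 is BAR 4:

```python
from pathlib import Path

def read_bars(bdf: str = "0000:00:03.0"):
    """Yield (index, start, end) for the populated BARs of a PCI device,
    read from /sys/bus/pci/devices/<bdf>/resource."""
    text = Path(f"/sys/bus/pci/devices/{bdf}/resource").read_text()
    for index, line in enumerate(text.splitlines()):
        start, end, _flags = (int(field, 16) for field in line.split())
        if end:  # unpopulated BARs read back as all zeroes
            yield index, start, end

# On the machine above this should report, among others, BAR 0 at
# 0xc040-0xc07f (I/O) and BAR 4 at 0xfe004000-0xfe007fff, matching the
# "reg 0x10" and "reg 0x20" lines in the log.
```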
May 8 00:19:57.921719 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:19:57.921726 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:19:57.921732 kernel: iommu: Default domain type: Translated
May 8 00:19:57.921739 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:19:57.921746 kernel: PCI: Using ACPI for IRQ routing
May 8 00:19:57.921753 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:19:57.921759 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 8 00:19:57.921769 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 8 00:19:57.921881 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:19:57.921994 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:19:57.922131 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:19:57.922142 kernel: vgaarb: loaded
May 8 00:19:57.922149 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:19:57.922156 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:19:57.922163 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:19:57.922173 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:19:57.922180 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:19:57.922187 kernel: pnp: PnP ACPI init
May 8 00:19:57.922309 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:19:57.922320 kernel: pnp: PnP ACPI: found 5 devices
May 8 00:19:57.922327 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:19:57.922334 kernel: NET: Registered PF_INET protocol family
May 8 00:19:57.922341 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:19:57.922351 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:19:57.922358 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:19:57.922365 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:19:57.922372 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:19:57.922378 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:19:57.922385 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:19:57.922392 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:19:57.922399 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:19:57.922405 kernel: NET: Registered PF_XDP protocol family
May 8 00:19:57.922514 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:19:57.922619 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:19:57.922742 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:19:57.922847 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 8 00:19:57.922951 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:19:57.923346 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 8 00:19:57.923358 kernel: PCI: CLS 0 bytes, default 64
May 8 00:19:57.923365 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 8 00:19:57.923372 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 8 00:19:57.923383 kernel: Initialise system trusted keyrings
May 8 00:19:57.923390 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:19:57.923397 kernel: Key type asymmetric registered
May 8 00:19:57.923403 kernel: Asymmetric key parser 'x509' registered
May 8 00:19:57.923410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:19:57.923417 kernel: io scheduler mq-deadline registered
May 8 00:19:57.923423 kernel: io scheduler kyber registered
May 8 00:19:57.923430 kernel: io scheduler bfq registered
May 8 00:19:57.923437 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:19:57.923447 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:19:57.923454 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:19:57.923461 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:19:57.923468 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:19:57.923475 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:19:57.923482 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:19:57.923488 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:19:57.923495 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:19:57.923615 kernel: rtc_cmos 00:03: RTC can wake from S4
May 8 00:19:57.923729 kernel: rtc_cmos 00:03: registered as rtc0
May 8 00:19:57.923835 kernel: rtc_cmos 00:03: setting system clock to 2025-05-08T00:19:57 UTC (1746663597)
May 8 00:19:57.923941 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 8 00:19:57.923951 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:19:57.923958 kernel: NET: Registered PF_INET6 protocol family
May 8 00:19:57.923964 kernel: Segment Routing with IPv6
May 8 00:19:57.923971 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:19:57.923978 kernel: NET: Registered PF_PACKET protocol family
May 8 00:19:57.923988 kernel: Key type dns_resolver registered
May 8 00:19:57.923995 kernel: IPI shorthand broadcast: enabled
May 8 00:19:57.924002 kernel: sched_clock: Marking stable (688004771, 199660317)->(948145805, -60480717)
May 8 00:19:57.924073 kernel: registered taskstats version 1
May 8 00:19:57.924080 kernel: Loading compiled-in X.509 certificates
May 8 00:19:57.924087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: dac8423f6f9fa2fb5f636925d45d7c2572b3a9b6'
May 8 00:19:57.924093 kernel: Key type .fscrypt registered
May 8 00:19:57.924100 kernel: Key type fscrypt-provisioning registered
May 8 00:19:57.924107 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:19:57.924117 kernel: ima: Allocated hash algorithm: sha1
May 8 00:19:57.924124 kernel: ima: No architecture policies found
May 8 00:19:57.924131 kernel: clk: Disabling unused clocks
May 8 00:19:57.924137 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 8 00:19:57.924144 kernel: Write protecting the kernel read-only data: 38912k
May 8 00:19:57.924152 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 8 00:19:57.924158 kernel: Run /init as init process
May 8 00:19:57.924165 kernel: with arguments:
May 8 00:19:57.924174 kernel: /init
May 8 00:19:57.924181 kernel: with environment:
May 8 00:19:57.924187 kernel: HOME=/
May 8 00:19:57.924194 kernel: TERM=linux
May 8 00:19:57.924201 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:19:57.924209 systemd[1]: Successfully made /usr/ read-only.
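The rtc_cmos line above pairs a human-readable timestamp with its Unix epoch value, and the correspondence is easy to confirm (this check is not part of the log):

```python
from datetime import datetime, timezone

# 1746663597 is the epoch value rtc_cmos logged when it set the clock.
stamp = datetime.fromtimestamp(1746663597, tz=timezone.utc)
assert stamp.isoformat() == "2025-05-08T00:19:57+00:00"

# The audit line earlier, audit(1746663597.157:1), carries the same
# second with millisecond resolution, followed by the event serial.
```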
May 8 00:19:57.924219 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:19:57.924227 systemd[1]: Detected virtualization kvm.
May 8 00:19:57.924236 systemd[1]: Detected architecture x86-64.
May 8 00:19:57.924243 systemd[1]: Running in initrd.
May 8 00:19:57.924250 systemd[1]: No hostname configured, using default hostname.
May 8 00:19:57.924258 systemd[1]: Hostname set to <localhost>.
May 8 00:19:57.924265 systemd[1]: Initializing machine ID from random generator.
May 8 00:19:57.924287 systemd[1]: Queued start job for default target initrd.target.
May 8 00:19:57.924300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:19:57.924307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:19:57.924316 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:19:57.924323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:19:57.924331 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:19:57.924339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:19:57.924348 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:19:57.924358 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:19:57.924366 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:19:57.924374 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:19:57.924381 systemd[1]: Reached target paths.target - Path Units.
May 8 00:19:57.924388 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:19:57.924396 systemd[1]: Reached target swap.target - Swaps.
May 8 00:19:57.924403 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:19:57.924411 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:19:57.924421 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:19:57.924429 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:19:57.924436 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 8 00:19:57.924444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:19:57.924452 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:19:57.924459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:19:57.924467 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:19:57.924474 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:19:57.924482 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:19:57.924492 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:19:57.924499 systemd[1]: Starting systemd-fsck-usr.service...
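The long +/- string in the systemd banner above is its compile-time feature list. A small helper (not from the log) that splits such a banner into enabled and disabled sets, run against the logged value:

```python
def split_features(banner: str) -> tuple[set[str], set[str]]:
    """Split a systemd feature banner like '+PAM +AUDIT -APPARMOR ...'
    into (enabled, disabled) feature-name sets."""
    tokens = banner.split()
    enabled = {t[1:] for t in tokens if t.startswith("+")}
    disabled = {t[1:] for t in tokens if t.startswith("-")}
    return enabled, disabled

enabled, disabled = split_features(
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
    "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
    "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 "
    "-PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD "
    "-BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE"
)
assert "TPM2" in enabled and "APPARMOR" in disabled
```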
May 8 00:19:57.924507 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:19:57.924514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:19:57.924522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:19:57.924530 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:19:57.924540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:19:57.924570 systemd-journald[178]: Collecting audit messages is disabled.
May 8 00:19:57.924591 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:19:57.924599 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:19:57.924607 systemd-journald[178]: Journal started
May 8 00:19:57.924626 systemd-journald[178]: Runtime Journal (/run/log/journal/5bd7a3ed273b4532ae3111c3ce252013) is 8M, max 78.3M, 70.3M free.
May 8 00:19:57.905527 systemd-modules-load[179]: Inserted module 'overlay'
May 8 00:19:57.981220 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:19:57.981237 kernel: Bridge firewalling registered
May 8 00:19:57.981248 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:19:57.935317 systemd-modules-load[179]: Inserted module 'br_netfilter'
May 8 00:19:57.981993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:19:57.982969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:57.984162 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:19:57.991135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:19:57.992629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:19:57.999170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:19:58.012706 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:19:58.025000 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:19:58.026118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:19:58.030219 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:19:58.034414 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:19:58.036654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:19:58.044136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:19:58.051326 dracut-cmdline[212]: dracut-dracut-053
May 8 00:19:58.054453 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979
May 8 00:19:58.080378 systemd-resolved[215]: Positive Trust Anchors:
May 8 00:19:58.080392 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:19:58.080418 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:19:58.086224 systemd-resolved[215]: Defaulting to hostname 'linux'.
May 8 00:19:58.087223 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:19:58.088100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:19:58.126052 kernel: SCSI subsystem initialized
May 8 00:19:58.135030 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:19:58.146052 kernel: iscsi: registered transport (tcp)
May 8 00:19:58.167259 kernel: iscsi: registered transport (qla4xxx)
May 8 00:19:58.167329 kernel: QLogic iSCSI HBA Driver
May 8 00:19:58.209629 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:19:58.217180 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:19:58.243823 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:19:58.243885 kernel: device-mapper: uevent: version 1.0.3
May 8 00:19:58.244525 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:19:58.286049 kernel: raid6: avx2x4 gen() 31785 MB/s
May 8 00:19:58.304029 kernel: raid6: avx2x2 gen() 30969 MB/s
May 8 00:19:58.322547 kernel: raid6: avx2x1 gen() 22592 MB/s
May 8 00:19:58.322575 kernel: raid6: using algorithm avx2x4 gen() 31785 MB/s
May 8 00:19:58.341521 kernel: raid6: .... xor() 5274 MB/s, rmw enabled
May 8 00:19:58.341553 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:19:58.361034 kernel: xor: automatically using best checksumming function avx
May 8 00:19:58.484040 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:19:58.494595 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:19:58.504156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:19:58.517489 systemd-udevd[398]: Using default interface naming scheme 'v255'.
May 8 00:19:58.522595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:19:58.532130 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:19:58.545202 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
May 8 00:19:58.577746 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:19:58.583150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:19:58.643897 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:19:58.652202 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:19:58.664379 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:19:58.669841 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:19:58.670422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:19:58.671392 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:19:58.679136 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:19:58.696126 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:19:58.727027 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:19:58.730184 kernel: libata version 3.00 loaded.
May 8 00:19:58.808146 kernel: scsi host0: Virtio SCSI HBA
May 8 00:19:58.825040 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:19:58.832723 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 8 00:19:58.832776 kernel: AES CTR mode by8 optimization enabled
May 8 00:19:58.832788 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:19:58.872396 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:19:58.872412 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:19:58.872567 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:19:58.872705 kernel: scsi host1: ahci
May 8 00:19:58.873049 kernel: scsi host2: ahci
May 8 00:19:58.873205 kernel: scsi host3: ahci
May 8 00:19:58.873347 kernel: scsi host4: ahci
May 8 00:19:58.873488 kernel: scsi host5: ahci
May 8 00:19:58.873625 kernel: scsi host6: ahci
May 8 00:19:58.873757 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
May 8 00:19:58.873768 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
May 8 00:19:58.873777 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
May 8 00:19:58.873787 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
May 8 00:19:58.873796 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
May 8 00:19:58.873809 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
May 8 00:19:58.833519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:19:58.833633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:19:58.834404 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:19:58.834985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:19:58.835121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:58.835835 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:19:58.843740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:19:58.930577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:19:58.939187 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:19:58.955909 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:19:59.187818 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 8 00:19:59.187898 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:19:59.187915 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:19:59.187929 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:19:59.187943 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:19:59.187957 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:19:59.211518 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 8 00:19:59.252316 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 8 00:19:59.252485 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 8 00:19:59.252627 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 8 00:19:59.252764 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 8 00:19:59.252901 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:19:59.252911 kernel: GPT:9289727 != 167739391
May 8 00:19:59.252921 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:19:59.252930 kernel: GPT:9289727 != 167739391
May 8 00:19:59.252943 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:19:59.252952 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:19:59.252961 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 8 00:19:59.292435 kernel: BTRFS: device fsid 1c9931ea-0995-4065-8a57-32743027822a devid 1 transid 42 /dev/sda3 scanned by (udev-worker) (447)
May 8 00:19:59.292478 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (446)
May 8 00:19:59.311410 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 8 00:19:59.321240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 8 00:19:59.328074 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 8 00:19:59.328678 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 8 00:19:59.337728 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 8 00:19:59.343123 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:19:59.347856 disk-uuid[570]: Primary Header is updated.
May 8 00:19:59.347856 disk-uuid[570]: Secondary Entries is updated.
May 8 00:19:59.347856 disk-uuid[570]: Secondary Header is updated.
May 8 00:19:59.353038 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:19:59.359896 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:20:00.362052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:20:00.362596 disk-uuid[571]: The operation has completed successfully.
May 8 00:20:00.416722 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:20:00.416843 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:20:00.445181 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:20:00.448489 sh[585]: Success
May 8 00:20:00.461751 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:20:00.508336 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:20:00.517102 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:20:00.518439 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
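The GPT warnings above are the expected first-boot signature of an image built for a small disk and then attached to a larger volume: the backup GPT header sits at the last LBA of the original image, not of the 80 GiB disk. A sketch of the arithmetic behind the "9289727 != 167739391" message (the image-size inference is mine, not stated in the log):

```python
SECTOR = 512

disk_sectors = 167739392              # "[sda] 167739392 512-byte logical blocks"
expected_alt_lba = disk_sectors - 1   # the backup GPT header belongs on the last LBA
recorded_alt_lba = 9289727            # what the primary header actually claims

assert expected_alt_lba == 167739391  # right-hand side of "9289727 != 167739391"

# The recorded value implies the image was built for a disk of
# (9289727 + 1) * 512 = 4756340736 bytes, roughly 4.4 GiB. The
# disk-uuid step that follows rewrites the secondary header
# ("Secondary Header is updated.") so the GPT spans the full volume.
```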
May 8 00:20:00.543178 kernel: BTRFS info (device dm-0): first mount of filesystem 1c9931ea-0995-4065-8a57-32743027822a
May 8 00:20:00.543213 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:20:00.545105 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:20:00.547081 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:20:00.549571 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:20:00.557034 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 8 00:20:00.557725 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:20:00.558765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:20:00.564122 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:20:00.568110 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:20:00.589705 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:20:00.589736 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:20:00.589748 kernel: BTRFS info (device sda6): using free space tree
May 8 00:20:00.595984 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:20:00.596029 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 00:20:00.602042 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:20:00.603734 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:20:00.609218 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:20:00.678805 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:20:00.680880 ignition[691]: Ignition 2.20.0
May 8 00:20:00.680892 ignition[691]: Stage: fetch-offline
May 8 00:20:00.680922 ignition[691]: no configs at "/usr/lib/ignition/base.d"
May 8 00:20:00.680932 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:20:00.681001 ignition[691]: parsed url from cmdline: ""
May 8 00:20:00.681019 ignition[691]: no config URL provided
May 8 00:20:00.681025 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:20:00.681034 ignition[691]: no config at "/usr/lib/ignition/user.ign"
May 8 00:20:00.681038 ignition[691]: failed to fetch config: resource requires networking
May 8 00:20:00.681183 ignition[691]: Ignition finished successfully
May 8 00:20:00.688161 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:20:00.690249 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:20:00.711155 systemd-networkd[769]: lo: Link UP
May 8 00:20:00.711167 systemd-networkd[769]: lo: Gained carrier
May 8 00:20:00.713118 systemd-networkd[769]: Enumeration completed
May 8 00:20:00.713194 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:20:00.713778 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:20:00.713782 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:20:00.714576 systemd[1]: Reached target network.target - Network.
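Everything the initrd needs to assemble /dev/mapper/usr travels on the kernel command line logged at the top of this transcript (mount.usr, verity.usr, verity.usrhash). A minimal parser for that key=value style (not from the log; in a real initrd the input would come from /proc/cmdline), run against the logged verity arguments:

```python
def parse_cmdline(cmdline: str) -> dict[str, str]:
    """Turn 'key=value key' kernel arguments into a dict; bare words map
    to an empty string. Repeated keys keep the last value, which matches
    how most cmdline consumers behave."""
    args: dict[str, str] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        args[key] = value
    return args

# Abbreviated to the verity-related pieces of the logged cmdline.
args = parse_cmdline(
    "mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "verity.usrhash=90f0413c3d850985bb1e645e67699e9890362068cb417837636fe4022f4be979"
)
assert args["mount.usr"] == "/dev/mapper/usr"
assert len(args["verity.usrhash"]) == 64  # a hex-encoded sha256 root hash
```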
May 8 00:20:00.715764 systemd-networkd[769]: eth0: Link UP
May 8 00:20:00.715768 systemd-networkd[769]: eth0: Gained carrier
May 8 00:20:00.715775 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:20:00.723131 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 8 00:20:00.733917 ignition[773]: Ignition 2.20.0
May 8 00:20:00.733927 ignition[773]: Stage: fetch
May 8 00:20:00.734086 ignition[773]: no configs at "/usr/lib/ignition/base.d"
May 8 00:20:00.734097 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:20:00.734182 ignition[773]: parsed url from cmdline: ""
May 8 00:20:00.734186 ignition[773]: no config URL provided
May 8 00:20:00.734191 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:20:00.734199 ignition[773]: no config at "/usr/lib/ignition/user.ign"
May 8 00:20:00.734220 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
May 8 00:20:00.734361 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 8 00:20:00.935100 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
May 8 00:20:00.935265 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 8 00:20:01.187105 systemd-networkd[769]: eth0: DHCPv4 address 172.232.0.241/24, gateway 172.232.0.1 acquired from 23.210.200.54
May 8 00:20:01.335489 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
May 8 00:20:01.420511 ignition[773]: PUT result: OK
May 8 00:20:01.420599 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
May 8 00:20:01.563548 ignition[773]: GET result: OK
May 8 00:20:01.563674 ignition[773]: parsing config with SHA512: 83da90e9c6e974b46c7575c0998f011e1826365556ba97796bbcbafb21b5f00c8d79adf9876893b6db97e288578e024cdad9ba3f4283e28af9cc11d6ba1c5f66
May 8 00:20:01.567301 unknown[773]: fetched base config from "system"
May 8 00:20:01.567310 unknown[773]: fetched base config from "system"
May 8 00:20:01.567746 ignition[773]: fetch: fetch complete
May 8 00:20:01.567316 unknown[773]: fetched user config from "akamai"
May 8 00:20:01.567751 ignition[773]: fetch: fetch passed
May 8 00:20:01.568157 ignition[773]: Ignition finished successfully
May 8 00:20:01.571094 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 8 00:20:01.577160 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:20:01.591282 ignition[781]: Ignition 2.20.0
May 8 00:20:01.591291 ignition[781]: Stage: kargs
May 8 00:20:01.591417 ignition[781]: no configs at "/usr/lib/ignition/base.d"
May 8 00:20:01.593588 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:20:01.591428 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:20:01.592099 ignition[781]: kargs: kargs passed
May 8 00:20:01.592134 ignition[781]: Ignition finished successfully
May 8 00:20:01.600131 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:20:01.613990 ignition[788]: Ignition 2.20.0
May 8 00:20:01.614658 ignition[788]: Stage: disks
May 8 00:20:01.614785 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 8 00:20:01.614796 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:20:01.616662 systemd[1]: Finished ignition-disks.service - Ignition (disks).
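The fetch stage's PUT/GET pair above is the link-local metadata-service handshake: obtain a short-lived token, then request user-data with it, after which Ignition logs the SHA512 of the config it parsed. A hedged re-creation of that sequence; only the two URLs appear in the log, and the header names are assumptions based on Linode Metadata Service conventions, not confirmed by this transcript:

```python
import hashlib
import urllib.request

BASE = "http://169.254.169.254/v1"

def fetch_user_data() -> bytes:
    # PUT /v1/token, as in the "PUT http://169.254.169.254/v1/token" lines.
    # The expiry header name is an assumption, not taken from the log.
    req = urllib.request.Request(
        f"{BASE}/token", method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": "300"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        token = resp.read().decode()

    # GET /v1/user-data with the token ("GET result: OK" in the log).
    req = urllib.request.Request(
        f"{BASE}/user-data", headers={"Metadata-Token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

# Ignition logs "parsing config with SHA512: ..."; the analogous digest is
# hashlib.sha512(payload).hexdigest(), though Ignition may decode or
# transform the payload before hashing, so the value is illustrative.
```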
May 8 00:20:01.615468 ignition[788]: disks: disks passed
May 8 00:20:01.618435 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:20:01.615502 ignition[788]: Ignition finished successfully
May 8 00:20:01.642048 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:20:01.643380 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:20:01.644579 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:20:01.646066 systemd[1]: Reached target basic.target - Basic System.
May 8 00:20:01.658208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:20:01.672614 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:20:01.676627 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:20:01.683083 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:20:01.757029 kernel: EXT4-fs (sda9): mounted filesystem 369e2962-701e-4244-8c1c-27f8fa83bc64 r/w with ordered data mode. Quota mode: none.
May 8 00:20:01.757478 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:20:01.758394 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:20:01.766099 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:20:01.769075 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:20:01.770063 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:20:01.770102 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:20:01.770124 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:20:01.780125 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (804)
May 8 00:20:01.780148 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9
May 8 00:20:01.784382 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:20:01.784404 kernel: BTRFS info (device sda6): using free space tree
May 8 00:20:01.784817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:20:01.791263 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:20:01.791291 kernel: BTRFS info (device sda6): auto enabling async discard
May 8 00:20:01.794268 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:20:01.796209 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:20:01.836773 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:20:01.842212 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
May 8 00:20:01.846527 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:20:01.849918 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:20:01.933998 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:20:01.941093 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:20:01.944751 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:20:01.951180 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:20:01.952184 kernel: BTRFS info (device sda6): last unmount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:20:01.975966 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:20:01.978168 ignition[916]: INFO : Ignition 2.20.0 May 8 00:20:01.978168 ignition[916]: INFO : Stage: mount May 8 00:20:01.978168 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:20:01.978168 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:20:01.981270 ignition[916]: INFO : mount: mount passed May 8 00:20:01.981270 ignition[916]: INFO : Ignition finished successfully May 8 00:20:01.980918 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:20:01.992171 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:20:02.734309 systemd-networkd[769]: eth0: Gained IPv6LL May 8 00:20:02.763138 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:20:02.778313 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (928) May 8 00:20:02.778394 kernel: BTRFS info (device sda6): first mount of filesystem 13774eeb-24b8-4f6d-a245-c0facb6e43f9 May 8 00:20:02.781098 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:20:02.783857 kernel: BTRFS info (device sda6): using free space tree May 8 00:20:02.789019 kernel: BTRFS info (device sda6): enabling ssd optimizations May 8 00:20:02.789045 kernel: BTRFS info (device sda6): auto enabling async discard May 8 00:20:02.792423 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 00:20:02.814062 ignition[945]: INFO : Ignition 2.20.0 May 8 00:20:02.814957 ignition[945]: INFO : Stage: files May 8 00:20:02.815628 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:20:02.817059 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 8 00:20:02.817843 ignition[945]: DEBUG : files: compiled without relabeling support, skipping May 8 00:20:02.818590 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:20:02.818590 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:20:02.820302 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:20:02.821413 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:20:02.821413 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:20:02.820908 unknown[945]: wrote ssh authorized keys file for user: core May 8 00:20:02.823883 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:20:02.823883 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:20:03.133720 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:20:03.459628 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:20:03.459628 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:20:03.461623 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 8 00:20:03.764465 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:20:03.851321 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:20:03.852782 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 8 00:20:04.070584 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:20:04.358899 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:20:04.358899 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:20:04.382840 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:20:04.382840 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:20:04.382840 ignition[945]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" May 8 00:20:04.382840 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:20:04.382840 ignition[945]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 8 00:20:04.382840 ignition[945]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 8 00:20:04.382840 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:20:04.382840 ignition[945]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 8 00:20:04.382840 ignition[945]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:20:04.382840 ignition[945]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:20:04.382840 ignition[945]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:20:04.382840 ignition[945]: INFO : files: files passed May 8 00:20:04.382840 ignition[945]: INFO : Ignition finished successfully May 8 00:20:04.362888 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:20:04.389195 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:20:04.393200 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:20:04.399476 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:20:04.399586 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:20:04.408676 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:20:04.408676 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:20:04.410700 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:20:04.413408 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:20:04.415403 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:20:04.422198 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:20:04.453050 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:20:04.453180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:20:04.454282 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:20:04.455702 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:20:04.456974 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:20:04.463163 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:20:04.477034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:20:04.485210 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:20:04.494100 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
May 8 00:20:04.495431 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:20:04.496089 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:20:04.496763 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:20:04.496915 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:20:04.498472 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:20:04.499129 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:20:04.500153 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:20:04.501381 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:20:04.502535 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:20:04.503634 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:20:04.505036 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:20:04.506466 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:20:04.507685 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:20:04.509085 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:20:04.510414 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:20:04.510519 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:20:04.512036 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:20:04.512808 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:20:04.513819 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:20:04.513913 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:20:04.515029 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:20:04.515137 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:20:04.516771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:20:04.516879 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:20:04.517663 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:20:04.517780 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:20:04.528824 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:20:04.533223 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:20:04.538100 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:20:04.538236 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:20:04.539160 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:20:04.539306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:20:04.554339 ignition[997]: INFO : Ignition 2.20.0
May 8 00:20:04.557785 ignition[997]: INFO : Stage: umount
May 8 00:20:04.557785 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:20:04.557785 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 8 00:20:04.557785 ignition[997]: INFO : umount: umount passed
May 8 00:20:04.557785 ignition[997]: INFO : Ignition finished successfully
May 8 00:20:04.555898 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:20:04.556033 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:20:04.560574 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:20:04.560684 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:20:04.567680 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:20:04.567732 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:20:04.568594 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:20:04.568644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:20:04.569407 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 8 00:20:04.569455 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 8 00:20:04.569962 systemd[1]: Stopped target network.target - Network.
May 8 00:20:04.572323 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:20:04.572376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:20:04.572996 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:20:04.573513 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:20:04.579076 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:20:04.597587 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:20:04.598645 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:20:04.599720 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:20:04.599768 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:20:04.600890 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:20:04.600931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:20:04.602118 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:20:04.602175 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:20:04.603189 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:20:04.603238 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:20:04.604496 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:20:04.605777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:20:04.608152 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:20:04.608752 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:20:04.608868 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:20:04.611030 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:20:04.611120 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:20:04.613622 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:20:04.613744 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:20:04.617636 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 8 00:20:04.618046 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:20:04.618181 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:20:04.619973 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 8 00:20:04.621384 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:20:04.621433 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:20:04.629151 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:20:04.629686 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:20:04.629745 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:20:04.630349 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:20:04.630398 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:20:04.632158 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:20:04.632222 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:20:04.632969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:20:04.633059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:20:04.634613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:20:04.639908 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 8 00:20:04.639985 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 8 00:20:04.649600 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:20:04.649738 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:20:04.657936 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:20:04.658178 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:20:04.659568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:20:04.659618 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:20:04.660492 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:20:04.660531 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:20:04.661694 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:20:04.661744 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:20:04.663443 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:20:04.663492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:20:04.664712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:20:04.664760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:20:04.671182 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:20:04.672584 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:20:04.672645 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:20:04.673332 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 00:20:04.673381 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:20:04.673958 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:20:04.674028 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:20:04.676425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:20:04.676479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:20:04.678323 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 8 00:20:04.678390 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 8 00:20:04.678733 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:20:04.678834 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:20:04.680427 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:20:04.688152 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:20:04.695273 systemd[1]: Switching root.
May 8 00:20:04.724556 systemd-journald[178]: Journal stopped
May 8 00:20:05.808746 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
May 8 00:20:05.808770 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:20:05.808783 kernel: SELinux: policy capability open_perms=1
May 8 00:20:05.808792 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:20:05.808801 kernel: SELinux: policy capability always_check_network=0
May 8 00:20:05.808814 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:20:05.808825 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:20:05.808834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:20:05.808842 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:20:05.808851 kernel: audit: type=1403 audit(1746663604.844:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:20:05.808861 systemd[1]: Successfully loaded SELinux policy in 50.019ms.
May 8 00:20:05.808875 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.882ms.
May 8 00:20:05.808885 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 8 00:20:05.808895 systemd[1]: Detected virtualization kvm.
May 8 00:20:05.808905 systemd[1]: Detected architecture x86-64.
May 8 00:20:05.808915 systemd[1]: Detected first boot.
May 8 00:20:05.808928 systemd[1]: Initializing machine ID from random generator.
May 8 00:20:05.808938 zram_generator::config[1042]: No configuration found.
May 8 00:20:05.808948 kernel: Guest personality initialized and is inactive
May 8 00:20:05.808958 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 8 00:20:05.808967 kernel: Initialized host personality
May 8 00:20:05.808976 kernel: NET: Registered PF_VSOCK protocol family
May 8 00:20:05.808985 systemd[1]: Populated /etc with preset unit settings.
May 8 00:20:05.808998 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 8 00:20:05.811036 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:20:05.811053 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:20:05.811084 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:20:05.811095 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:20:05.811105 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:20:05.811115 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:20:05.811129 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:20:05.811139 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:20:05.811165 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:20:05.811175 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:20:05.811185 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:20:05.811207 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:20:05.811228 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:20:05.811238 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:20:05.811258 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:20:05.811273 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:20:05.811287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:20:05.811297 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:20:05.811308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:20:05.811318 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:20:05.811328 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:20:05.811339 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:20:05.811354 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:20:05.811364 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:20:05.811374 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:20:05.811384 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:20:05.811394 systemd[1]: Reached target swap.target - Swaps.
May 8 00:20:05.811404 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:20:05.811414 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:20:05.811424 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 8 00:20:05.811434 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:20:05.811447 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:20:05.811458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:20:05.811468 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:20:05.811478 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:20:05.811491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:20:05.811501 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:20:05.811511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:05.811521 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:20:05.811531 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:20:05.811541 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:20:05.811552 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:20:05.811562 systemd[1]: Reached target machines.target - Containers.
May 8 00:20:05.811577 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:20:05.811587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:20:05.811597 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:20:05.811607 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:20:05.811617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:20:05.811627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:20:05.811637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:20:05.811648 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:20:05.811658 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:20:05.811671 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:20:05.811682 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:20:05.811692 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:20:05.811702 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:20:05.811712 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:20:05.811723 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:20:05.811733 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:20:05.811743 kernel: fuse: init (API version 7.39)
May 8 00:20:05.811756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:20:05.811766 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:20:05.811776 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:20:05.811786 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 8 00:20:05.811798 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:20:05.811829 systemd-journald[1123]: Collecting audit messages is disabled.
May 8 00:20:05.811853 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:20:05.811864 kernel: loop: module loaded
May 8 00:20:05.811874 systemd[1]: Stopped verity-setup.service.
May 8 00:20:05.811885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:05.811895 systemd-journald[1123]: Journal started
May 8 00:20:05.811918 systemd-journald[1123]: Runtime Journal (/run/log/journal/469e3251eb2d4e108720cbe1f3ba985d) is 8M, max 78.3M, 70.3M free.
May 8 00:20:05.486841 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:20:05.500169 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 8 00:20:05.500728 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:20:05.828044 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:20:05.829796 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:20:05.832873 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:20:05.833600 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:20:05.834751 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:20:05.836431 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:20:05.837109 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:20:05.837969 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:20:05.839516 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:20:05.839738 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:20:05.843533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:20:05.843750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:20:05.844066 kernel: ACPI: bus type drm_connector registered
May 8 00:20:05.844619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:20:05.844838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:20:05.845942 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:20:05.846196 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:20:05.847147 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:20:05.847952 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:20:05.848208 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:20:05.849587 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:20:05.850508 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:20:05.854725 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:20:05.855631 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:20:05.862362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:20:05.869057 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 8 00:20:05.875822 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:20:05.887036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:20:05.896123 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:20:05.896698 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:20:05.896736 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:20:05.900157 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 8 00:20:05.905101 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:20:05.913454 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:20:05.914637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:20:05.921196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:20:05.933187 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:20:05.933794 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:20:05.937169 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:20:05.939095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:20:05.941335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:20:05.948233 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:20:05.950598 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:20:05.955236 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:20:05.956222 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:20:05.958379 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:20:05.980066 systemd-journald[1123]: Time spent on flushing to /var/log/journal/469e3251eb2d4e108720cbe1f3ba985d is 48.892ms for 994 entries.
May 8 00:20:05.980066 systemd-journald[1123]: System Journal (/var/log/journal/469e3251eb2d4e108720cbe1f3ba985d) is 8M, max 195.6M, 187.6M free.
May 8 00:20:06.047385 systemd-journald[1123]: Received client request to flush runtime journal.
May 8 00:20:06.047423 kernel: loop0: detected capacity change from 0 to 210664
May 8 00:20:05.994858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:20:06.006519 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:20:06.017563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:20:06.020724 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:20:06.033444 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 8 00:20:06.049517 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:20:06.051989 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
May 8 00:20:06.052180 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
May 8 00:20:06.065644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:20:06.077341 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:20:06.079331 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 8 00:20:06.092115 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:20:06.086095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:20:06.096420 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:20:06.119045 kernel: loop1: detected capacity change from 0 to 8
May 8 00:20:06.142255 kernel: loop2: detected capacity change from 0 to 138176
May 8 00:20:06.148189 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:20:06.157175 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:20:06.184781 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
May 8 00:20:06.185191 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
May 8 00:20:06.191462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:20:06.211063 kernel: loop3: detected capacity change from 0 to 147912
May 8 00:20:06.256059 kernel: loop4: detected capacity change from 0 to 210664
May 8 00:20:06.283277 kernel: loop5: detected capacity change from 0 to 8
May 8 00:20:06.286229 kernel: loop6: detected capacity change from 0 to 138176
May 8 00:20:06.309944 kernel: loop7: detected capacity change from 0 to 147912
May 8 00:20:06.326595 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
May 8 00:20:06.327552 (sd-merge)[1197]: Merged extensions into '/usr'.
May 8 00:20:06.335227 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:20:06.335359 systemd[1]: Reloading...
May 8 00:20:06.459767 zram_generator::config[1226]: No configuration found.
May 8 00:20:06.525696 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:20:06.592456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:20:06.652157 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:20:06.652354 systemd[1]: Reloading finished in 316 ms.
May 8 00:20:06.680660 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:20:06.681922 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:20:06.696192 systemd[1]: Starting ensure-sysext.service...
May 8 00:20:06.700221 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:20:06.718359 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
May 8 00:20:06.718370 systemd[1]: Reloading...
May 8 00:20:06.763629 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:20:06.766314 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:20:06.767417 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:20:06.767720 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 8 00:20:06.768416 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
May 8 00:20:06.775506 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:20:06.777212 systemd-tmpfiles[1269]: Skipping /boot
May 8 00:20:06.794434 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:20:06.794499 systemd-tmpfiles[1269]: Skipping /boot
May 8 00:20:06.831038 zram_generator::config[1302]: No configuration found.
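The (sd-merge) lines above show systemd-sysext picking up the extension images staged earlier, including the kubernetes.raw link the files stage wrote under /etc/extensions, and merging them into /usr. As a rough illustration, this sketch lists images in the directories systemd-sysext is documented to search; it only enumerates files and is not how sd-merge itself works internally.

```python
# Hedged sketch: enumerate sysext images where systemd-sysext looks for
# them (search directories per the systemd-sysext documentation).
from pathlib import Path

for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    p = Path(d)
    if p.is_dir():
        for img in sorted(p.iterdir()):
            # e.g. /etc/extensions/kubernetes.raw, written by the files stage
            print(img)
```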
May 8 00:20:06.940547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:20:06.998659 systemd[1]: Reloading finished in 279 ms.
May 8 00:20:07.015763 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:20:07.027146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:20:07.040295 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 00:20:07.043357 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:20:07.053217 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:20:07.061228 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:20:07.067418 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:20:07.075262 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:20:07.080922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:07.081110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:20:07.086672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:20:07.095406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:20:07.104554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:20:07.105666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:20:07.105782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:20:07.114082 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:20:07.116313 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:07.119789 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:20:07.136228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:20:07.136502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:20:07.137605 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:20:07.137839 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:20:07.144882 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
May 8 00:20:07.146403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:07.146627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:20:07.148390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:20:07.153231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:20:07.154227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:20:07.154318 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:20:07.158246 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:20:07.160060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:07.161281 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:20:07.171587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:07.171779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:20:07.180596 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:20:07.181719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:20:07.181872 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 8 00:20:07.182111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:20:07.195201 systemd[1]: Finished ensure-sysext.service.
May 8 00:20:07.217299 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:20:07.218949 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:20:07.219657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:20:07.222417 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:20:07.223359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:20:07.223652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:20:07.225504 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:20:07.225955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:20:07.227522 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:20:07.250168 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:20:07.252072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:20:07.252120 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:20:07.253896 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:20:07.255209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:20:07.256974 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:20:07.258954 augenrules[1391]: No rules
May 8 00:20:07.262411 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:20:07.262979 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 00:20:07.265647 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:20:07.265894 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:20:07.346461 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:20:07.411164 systemd-networkd[1396]: lo: Link UP
May 8 00:20:07.411184 systemd-networkd[1396]: lo: Gained carrier
May 8 00:20:07.412595 systemd-networkd[1396]: Enumeration completed
May 8 00:20:07.412735 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:20:07.413511 systemd-resolved[1351]: Positive Trust Anchors:
May 8 00:20:07.415066 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:20:07.415097 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:20:07.425224 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 8 00:20:07.429181 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:20:07.436145 systemd-resolved[1351]: Defaulting to hostname 'linux'.
May 8 00:20:07.441893 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:20:07.442717 systemd[1]: Reached target network.target - Network.
May 8 00:20:07.444153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:20:07.446884 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:20:07.446897 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:20:07.448409 systemd-networkd[1396]: eth0: Link UP
May 8 00:20:07.448418 systemd-networkd[1396]: eth0: Gained carrier
May 8 00:20:07.448432 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:20:07.452359 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:20:07.454118 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:20:07.465295 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 8 00:20:07.516385 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1401)
May 8 00:20:07.523088 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:20:07.554050 kernel: ACPI: button: Power Button [PWRF]
May 8 00:20:07.577064 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:20:07.585035 kernel: EDAC MC: Ver: 3.0.0
May 8 00:20:07.598039 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:20:07.605255 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:20:07.605456 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:20:07.624534 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 8 00:20:07.640665 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:20:07.647635 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:20:07.653073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:20:07.663149 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:20:07.666686 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:20:07.673307 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:20:07.687619 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:20:07.718527 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:20:07.720321 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:20:07.775221 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:20:07.776399 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:20:07.778569 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:20:07.779260 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:20:07.780231 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:20:07.780889 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:20:07.781033 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:20:07.781876 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:20:07.782525 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:20:07.783095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:20:07.783124 systemd[1]: Reached target paths.target - Path Units.
May 8 00:20:07.783600 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:20:07.787177 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:20:07.789318 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:20:07.792799 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 8 00:20:07.796473 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 8 00:20:07.797075 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 8 00:20:07.808005 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:20:07.808932 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 8 00:20:07.810264 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:20:07.811095 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:20:07.812296 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:20:07.812816 systemd[1]: Reached target basic.target - Basic System.
May 8 00:20:07.813399 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:20:07.813438 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:20:07.818188 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:20:07.820176 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 00:20:07.825259 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:20:07.830382 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:20:07.840172 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:20:07.840857 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:20:07.851349 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:20:07.855141 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:20:07.864197 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:20:07.872527 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:20:07.883822 jq[1457]: false
May 8 00:20:07.884185 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:20:07.885537 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:20:07.885969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:20:07.888268 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:20:07.894915 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:20:07.906361 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:20:07.906625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:20:07.909858 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:20:07.910318 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:20:07.915092 systemd-networkd[1396]: eth0: DHCPv4 address 172.232.0.241/24, gateway 172.232.0.1 acquired from 23.210.200.54
May 8 00:20:07.919261 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection.
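The DHCPv4 lease reacquired here matches the one obtained in the initramfs: 172.232.0.241/24 via gateway 172.232.0.1. A two-line check with Python's ipaddress module confirms that address and gateway sit in the same /24, which is all the log implies about the topology.

```python
# Sanity-check the lease from the log with the stdlib ipaddress module.
import ipaddress

iface = ipaddress.ip_interface("172.232.0.241/24")
print(iface.network)                                          # 172.232.0.0/24
print(ipaddress.ip_address("172.232.0.1") in iface.network)   # True
```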
May 8 00:20:07.921388 extend-filesystems[1458]: Found loop4 May 8 00:20:07.921388 extend-filesystems[1458]: Found loop5 May 8 00:20:07.921388 extend-filesystems[1458]: Found loop6 May 8 00:20:07.921388 extend-filesystems[1458]: Found loop7 May 8 00:20:07.921388 extend-filesystems[1458]: Found sda May 8 00:20:07.921388 extend-filesystems[1458]: Found sda1 May 8 00:20:07.921388 extend-filesystems[1458]: Found sda2 May 8 00:20:07.921388 extend-filesystems[1458]: Found sda3 May 8 00:20:07.921388 extend-filesystems[1458]: Found usr May 8 00:20:07.921388 extend-filesystems[1458]: Found sda4 May 8 00:20:07.921388 extend-filesystems[1458]: Found sda6 May 8 00:20:07.921388 extend-filesystems[1458]: Found sda7 May 8 00:20:07.921388 extend-filesystems[1458]: Found sda9 May 8 00:20:07.921388 extend-filesystems[1458]: Checking size of /dev/sda9 May 8 00:20:08.481987 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks May 8 00:20:08.482058 coreos-metadata[1455]: May 08 00:20:08.006 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 8 00:20:07.957467 dbus-daemon[1456]: [system] SELinux support is enabled May 8 00:20:07.935904 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:20:08.485256 extend-filesystems[1458]: Resized partition /dev/sda9 May 8 00:20:07.969559 dbus-daemon[1456]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1396 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 8 00:20:07.936850 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:20:08.492487 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) May 8 00:20:08.450346 dbus-daemon[1456]: [system] Successfully activated service 'org.freedesktop.systemd1' May 8 00:20:07.959657 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:20:07.960168 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:20:08.498909 update_engine[1468]: I20250508 00:20:07.952872 1468 main.cc:92] Flatcar Update Engine starting May 8 00:20:08.498909 update_engine[1468]: I20250508 00:20:07.971418 1468 update_check_scheduler.cc:74] Next update check in 2m53s May 8 00:20:08.445636 systemd-resolved[1351]: Clock change detected. Flushing caches. May 8 00:20:08.499215 tar[1478]: linux-amd64/helm May 8 00:20:08.445769 systemd-timesyncd[1379]: Contacted time server 23.150.41.123:123 (0.flatcar.pool.ntp.org). May 8 00:20:08.502626 jq[1470]: true May 8 00:20:08.445817 systemd-timesyncd[1379]: Initial clock synchronization to Thu 2025-05-08 00:20:08.445595 UTC. May 8 00:20:08.450491 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:20:08.505726 jq[1496]: true May 8 00:20:08.450524 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:20:08.451191 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:20:08.451206 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
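
Note the timestamps in the records above jump from 00:20:07.9x to 00:20:08.4x and back: systemd-timesyncd stepped the clock on first NTP contact ("Clock change detected. Flushing caches" / "Initial clock synchronization"), so records queued before the step still carry pre-step times. On a running system the sync state can be checked with, for example:

    # Current NTP server, stratum, offset and poll interval
    timedatectl timesync-status

    # Summary including "System clock synchronized: yes/no"
    timedatectl status
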
May 8 00:20:08.451923 systemd[1]: Started update-engine.service - Update Engine. May 8 00:20:08.466610 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 8 00:20:08.481608 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:20:08.538547 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (1408) May 8 00:20:08.632606 coreos-metadata[1455]: May 08 00:20:08.631 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 8 00:20:08.643479 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:20:08.666032 bash[1517]: Updated "/home/core/.ssh/authorized_keys" May 8 00:20:08.667516 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:20:08.678486 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:20:08.692605 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:20:08.700676 systemd-logind[1466]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:20:08.702570 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:20:08.703839 systemd[1]: Starting sshkeys.service... May 8 00:20:08.705362 systemd-logind[1466]: New seat seat0. May 8 00:20:08.714001 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:20:08.753542 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 8 00:20:08.762690 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 8 00:20:08.773018 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:20:08.773284 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:20:08.783824 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:20:08.826886 kernel: EXT4-fs (sda9): resized filesystem to 20360187 May 8 00:20:08.849590 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 8 00:20:08.849590 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 10 May 8 00:20:08.849590 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. May 8 00:20:08.861624 extend-filesystems[1458]: Resized filesystem in /dev/sda9 May 8 00:20:08.850673 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:20:08.850968 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:20:08.860201 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:20:08.872864 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:20:08.882765 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:20:08.884716 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:20:08.912913 coreos-metadata[1535]: May 08 00:20:08.912 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 8 00:20:08.922008 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
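
The extend-filesystems service above grows the root ext4 filesystem on /dev/sda9 online, from 553472 to 20360187 4k blocks, so first boot ends with the whole disk usable. The manual equivalent on an already-enlarged partition would be roughly (a sketch; ext4 supports growing while mounted):

    # Confirm the partition and the filesystem disagree about size
    lsblk -b /dev/sda9
    df -B4k /

    # Online-resize the mounted ext4 filesystem to fill its partition
    sudo resize2fs /dev/sda9
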
May 8 00:20:08.924519 dbus-daemon[1456]: [system] Successfully activated service 'org.freedesktop.hostname1' May 8 00:20:08.925477 dbus-daemon[1456]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1497 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 8 00:20:08.932702 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:20:08.936653 systemd[1]: Starting polkit.service - Authorization Manager... May 8 00:20:08.947666 polkitd[1552]: Started polkitd version 121 May 8 00:20:08.954644 polkitd[1552]: Loading rules from directory /etc/polkit-1/rules.d May 8 00:20:08.954725 polkitd[1552]: Loading rules from directory /usr/share/polkit-1/rules.d May 8 00:20:08.958166 containerd[1489]: time="2025-05-08T00:20:08.958094077Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 00:20:08.959781 polkitd[1552]: Finished loading, compiling and executing 2 rules May 8 00:20:08.961970 dbus-daemon[1456]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 8 00:20:08.962081 systemd[1]: Started polkit.service - Authorization Manager. May 8 00:20:08.963660 polkitd[1552]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 8 00:20:08.980107 systemd-hostnamed[1497]: Hostname set to <172-232-0-241> (transient) May 8 00:20:08.980245 systemd-resolved[1351]: System hostname changed to '172-232-0-241'. May 8 00:20:08.986469 containerd[1489]: time="2025-05-08T00:20:08.984824300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:20:08.986774 containerd[1489]: time="2025-05-08T00:20:08.986751314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:20:08.986828 containerd[1489]: time="2025-05-08T00:20:08.986815184Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:20:08.986891 containerd[1489]: time="2025-05-08T00:20:08.986877844Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:20:08.987083 containerd[1489]: time="2025-05-08T00:20:08.987067735Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:20:08.987135 containerd[1489]: time="2025-05-08T00:20:08.987124335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:20:08.987237 containerd[1489]: time="2025-05-08T00:20:08.987221175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:20:08.987332 containerd[1489]: time="2025-05-08T00:20:08.987319245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:20:08.987626 containerd[1489]: time="2025-05-08T00:20:08.987603656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:20:08.987680 containerd[1489]: time="2025-05-08T00:20:08.987668426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:20:08.987723 containerd[1489]: time="2025-05-08T00:20:08.987711446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:20:08.987775 containerd[1489]: time="2025-05-08T00:20:08.987763426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:20:08.987910 containerd[1489]: time="2025-05-08T00:20:08.987895656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:20:08.988220 containerd[1489]: time="2025-05-08T00:20:08.988204677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:20:08.988423 containerd[1489]: time="2025-05-08T00:20:08.988405987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:20:08.988504 containerd[1489]: time="2025-05-08T00:20:08.988491588Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:20:08.988643 containerd[1489]: time="2025-05-08T00:20:08.988627368Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:20:08.988742 containerd[1489]: time="2025-05-08T00:20:08.988728608Z" level=info msg="metadata content store policy set" policy=shared May 8 00:20:08.991189 containerd[1489]: time="2025-05-08T00:20:08.991171403Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:20:08.991281 containerd[1489]: time="2025-05-08T00:20:08.991268213Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:20:08.991594 containerd[1489]: time="2025-05-08T00:20:08.991572274Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:20:08.991649 containerd[1489]: time="2025-05-08T00:20:08.991637584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:20:08.991701 containerd[1489]: time="2025-05-08T00:20:08.991689224Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:20:08.991851 containerd[1489]: time="2025-05-08T00:20:08.991835174Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:20:08.993579 containerd[1489]: time="2025-05-08T00:20:08.993559848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:20:08.993740 containerd[1489]: time="2025-05-08T00:20:08.993724528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 May 8 00:20:08.993809 containerd[1489]: time="2025-05-08T00:20:08.993795238Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993845338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993860548Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993871698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993882388Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993895128Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993907338Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993918458Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993928898Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993939078Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993956048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993968968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993979969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.993990749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994182 containerd[1489]: time="2025-05-08T00:20:08.994001739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994016799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994026709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994037489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994048719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994060649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994070219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994079729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994089319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994100719Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994116849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994128189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:20:08.994416 containerd[1489]: time="2025-05-08T00:20:08.994137049Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994640070Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994661070Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994670670Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994743550Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994754080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994764860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994773580Z" level=info msg="NRI interface is disabled by configuration." May 8 00:20:08.995519 containerd[1489]: time="2025-05-08T00:20:08.994782690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:20:08.995683 containerd[1489]: time="2025-05-08T00:20:08.994993631Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:20:08.995683 containerd[1489]: time="2025-05-08T00:20:08.995029921Z" level=info msg="Connect containerd service" May 8 00:20:08.995683 containerd[1489]: time="2025-05-08T00:20:08.995055011Z" level=info msg="using legacy CRI server" May 8 00:20:08.995683 containerd[1489]: time="2025-05-08T00:20:08.995061071Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:20:08.995683 containerd[1489]: time="2025-05-08T00:20:08.995155841Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:20:08.996539 containerd[1489]: time="2025-05-08T00:20:08.996517854Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:20:08.996719 
containerd[1489]: time="2025-05-08T00:20:08.996686294Z" level=info msg="Start subscribing containerd event" May 8 00:20:08.996781 containerd[1489]: time="2025-05-08T00:20:08.996768494Z" level=info msg="Start recovering state" May 8 00:20:08.996867 containerd[1489]: time="2025-05-08T00:20:08.996854884Z" level=info msg="Start event monitor" May 8 00:20:08.997047 containerd[1489]: time="2025-05-08T00:20:08.997033525Z" level=info msg="Start snapshots syncer" May 8 00:20:08.997110 containerd[1489]: time="2025-05-08T00:20:08.997097805Z" level=info msg="Start cni network conf syncer for default" May 8 00:20:08.997150 containerd[1489]: time="2025-05-08T00:20:08.997140355Z" level=info msg="Start streaming server" May 8 00:20:08.997380 containerd[1489]: time="2025-05-08T00:20:08.997365105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:20:08.997496 containerd[1489]: time="2025-05-08T00:20:08.997482426Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:20:09.003507 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:20:09.005196 containerd[1489]: time="2025-05-08T00:20:09.004188219Z" level=info msg="containerd successfully booted in 0.047191s" May 8 00:20:09.007598 coreos-metadata[1455]: May 08 00:20:09.007 INFO Fetch successful May 8 00:20:09.007790 coreos-metadata[1455]: May 08 00:20:09.007 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 8 00:20:09.018685 coreos-metadata[1535]: May 08 00:20:09.018 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 8 00:20:09.125665 systemd-networkd[1396]: eth0: Gained IPv6LL May 8 00:20:09.133298 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:20:09.135021 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:20:09.144638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:09.148074 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:20:09.193371 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:20:09.198345 coreos-metadata[1535]: May 08 00:20:09.197 INFO Fetch successful May 8 00:20:09.214977 update-ssh-keys[1578]: Updated "/home/core/.ssh/authorized_keys" May 8 00:20:09.219368 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 00:20:09.224110 tar[1478]: linux-amd64/LICENSE May 8 00:20:09.224110 tar[1478]: linux-amd64/README.md May 8 00:20:09.232054 systemd[1]: Finished sshkeys.service. May 8 00:20:09.238048 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:20:09.262354 coreos-metadata[1455]: May 08 00:20:09.262 INFO Fetch successful May 8 00:20:09.350588 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 00:20:09.352020 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:20:09.979661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:09.981434 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:20:09.982935 systemd[1]: Startup finished in 810ms (kernel) + 7.148s (initrd) + 4.748s (userspace) = 12.707s. 
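
The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this point: no CNI add-on has been installed yet, so pod networking cannot come up. A network add-on (flannel, Calico, etc.) normally drops a conflist into that directory; purely as an illustration, a minimal hand-written bridge config could look like the following (file name, network name and subnet are made up):

    # Hypothetical example only - real clusters get this file from their CNI add-on
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/10-example.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }
    EOF
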
May 8 00:20:09.983939 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:20:10.535207 kubelet[1609]: E0508 00:20:10.535129 1609 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:20:10.539237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:20:10.539486 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:20:10.540058 systemd[1]: kubelet.service: Consumed 849ms CPU time, 244.8M memory peak. May 8 00:20:12.144925 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:20:12.155078 systemd[1]: Started sshd@0-172.232.0.241:22-139.178.89.65:57602.service - OpenSSH per-connection server daemon (139.178.89.65:57602). May 8 00:20:12.490253 sshd[1622]: Accepted publickey for core from 139.178.89.65 port 57602 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:20:12.492308 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:12.502929 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:20:12.508667 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:20:12.511175 systemd-logind[1466]: New session 1 of user core. May 8 00:20:12.522289 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:20:12.528698 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:20:12.540300 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:20:12.543039 systemd-logind[1466]: New session c1 of user core. May 8 00:20:12.678720 systemd[1626]: Queued start job for default target default.target. May 8 00:20:12.685829 systemd[1626]: Created slice app.slice - User Application Slice. May 8 00:20:12.685857 systemd[1626]: Reached target paths.target - Paths. May 8 00:20:12.685901 systemd[1626]: Reached target timers.target - Timers. May 8 00:20:12.687602 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:20:12.699941 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:20:12.700070 systemd[1626]: Reached target sockets.target - Sockets. May 8 00:20:12.700108 systemd[1626]: Reached target basic.target - Basic System. May 8 00:20:12.700150 systemd[1626]: Reached target default.target - Main User Target. May 8 00:20:12.700181 systemd[1626]: Startup finished in 149ms. May 8 00:20:12.700821 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:20:12.711665 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:20:12.982760 systemd[1]: Started sshd@1-172.232.0.241:22-139.178.89.65:57604.service - OpenSSH per-connection server daemon (139.178.89.65:57604). May 8 00:20:13.305671 sshd[1637]: Accepted publickey for core from 139.178.89.65 port 57604 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:20:13.307244 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:13.311721 systemd-logind[1466]: New session 2 of user core. 
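
The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by "kubeadm init" or "kubeadm join", so this failure (and the restart attempts that follow later in the log) is normal on a node that has not yet been bootstrapped into a cluster. For illustration, the generated file is a KubeletConfiguration object along these lines (a trimmed, hypothetical sketch, not this host's actual config):

    # Roughly what kubeadm would write to /var/lib/kubelet/config.yaml (abridged)
    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
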
May 8 00:20:13.327593 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:20:13.548894 sshd[1639]: Connection closed by 139.178.89.65 port 57604 May 8 00:20:13.549572 sshd-session[1637]: pam_unix(sshd:session): session closed for user core May 8 00:20:13.553021 systemd[1]: sshd@1-172.232.0.241:22-139.178.89.65:57604.service: Deactivated successfully. May 8 00:20:13.555173 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:20:13.556520 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. May 8 00:20:13.557946 systemd-logind[1466]: Removed session 2. May 8 00:20:13.628690 systemd[1]: Started sshd@2-172.232.0.241:22-139.178.89.65:57608.service - OpenSSH per-connection server daemon (139.178.89.65:57608). May 8 00:20:13.957381 sshd[1645]: Accepted publickey for core from 139.178.89.65 port 57608 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:20:13.959216 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:13.964078 systemd-logind[1466]: New session 3 of user core. May 8 00:20:13.973713 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:20:14.198971 sshd[1647]: Connection closed by 139.178.89.65 port 57608 May 8 00:20:14.200917 sshd-session[1645]: pam_unix(sshd:session): session closed for user core May 8 00:20:14.206598 systemd[1]: sshd@2-172.232.0.241:22-139.178.89.65:57608.service: Deactivated successfully. May 8 00:20:14.208837 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:20:14.209919 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. May 8 00:20:14.210954 systemd-logind[1466]: Removed session 3. May 8 00:20:14.263670 systemd[1]: Started sshd@3-172.232.0.241:22-139.178.89.65:57610.service - OpenSSH per-connection server daemon (139.178.89.65:57610). May 8 00:20:14.594629 sshd[1653]: Accepted publickey for core from 139.178.89.65 port 57610 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:20:14.596485 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:14.602236 systemd-logind[1466]: New session 4 of user core. May 8 00:20:14.609603 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:20:14.846321 sshd[1655]: Connection closed by 139.178.89.65 port 57610 May 8 00:20:14.847148 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 8 00:20:14.851712 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. May 8 00:20:14.852583 systemd[1]: sshd@3-172.232.0.241:22-139.178.89.65:57610.service: Deactivated successfully. May 8 00:20:14.855278 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:20:14.856171 systemd-logind[1466]: Removed session 4. May 8 00:20:14.915859 systemd[1]: Started sshd@4-172.232.0.241:22-139.178.89.65:57614.service - OpenSSH per-connection server daemon (139.178.89.65:57614). May 8 00:20:15.247641 sshd[1661]: Accepted publickey for core from 139.178.89.65 port 57614 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:20:15.249585 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:15.254798 systemd-logind[1466]: New session 5 of user core. May 8 00:20:15.257598 systemd[1]: Started session-5.scope - Session 5 of User core. 
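
Each login in this sequence runs in its own per-connection unit (sshd@0-172.232.0.241:22-139.178.89.65:57602.service and so on, named after the local and remote endpoints) plus a session scope under user-500.slice. Active connections and sessions can be enumerated with:

    # Per-connection sshd instances spawned from sshd.socket
    systemctl list-units 'sshd@*'

    # logind's view of the same logins
    loginctl list-sessions
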
May 8 00:20:15.456944 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:20:15.457287 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:20:15.474909 sudo[1664]: pam_unix(sudo:session): session closed for user root May 8 00:20:15.526899 sshd[1663]: Connection closed by 139.178.89.65 port 57614 May 8 00:20:15.528012 sshd-session[1661]: pam_unix(sshd:session): session closed for user core May 8 00:20:15.531305 systemd[1]: sshd@4-172.232.0.241:22-139.178.89.65:57614.service: Deactivated successfully. May 8 00:20:15.533306 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:20:15.534743 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. May 8 00:20:15.535941 systemd-logind[1466]: Removed session 5. May 8 00:20:15.594642 systemd[1]: Started sshd@5-172.232.0.241:22-139.178.89.65:57620.service - OpenSSH per-connection server daemon (139.178.89.65:57620). May 8 00:20:15.928485 sshd[1670]: Accepted publickey for core from 139.178.89.65 port 57620 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:20:15.930061 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:15.935571 systemd-logind[1466]: New session 6 of user core. May 8 00:20:15.942577 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:20:16.130542 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:20:16.130871 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:20:16.134574 sudo[1674]: pam_unix(sudo:session): session closed for user root May 8 00:20:16.140007 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 00:20:16.140311 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:20:16.159701 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 00:20:16.187844 augenrules[1696]: No rules May 8 00:20:16.191173 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:20:16.191666 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 00:20:16.192997 sudo[1673]: pam_unix(sudo:session): session closed for user root May 8 00:20:16.244241 sshd[1672]: Connection closed by 139.178.89.65 port 57620 May 8 00:20:16.244788 sshd-session[1670]: pam_unix(sshd:session): session closed for user core May 8 00:20:16.247628 systemd[1]: sshd@5-172.232.0.241:22-139.178.89.65:57620.service: Deactivated successfully. May 8 00:20:16.249523 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:20:16.250812 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. May 8 00:20:16.251969 systemd-logind[1466]: Removed session 6. May 8 00:20:16.308674 systemd[1]: Started sshd@6-172.232.0.241:22-139.178.89.65:57630.service - OpenSSH per-connection server daemon (139.178.89.65:57630). May 8 00:20:16.635618 sshd[1705]: Accepted publickey for core from 139.178.89.65 port 57630 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:20:16.637654 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:20:16.642731 systemd-logind[1466]: New session 7 of user core. May 8 00:20:16.650713 systemd[1]: Started session-7.scope - Session 7 of User core. 
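
Removing 80-selinux.rules and 99-default.rules leaves /etc/audit/rules.d empty, which is why augenrules reports "No rules" when audit-rules restarts. Rules would be restored by dropping a file back into that directory; a hypothetical example:

    # Illustrative rule only: watch /etc/passwd for writes and attribute changes
    echo '-w /etc/passwd -p wa -k passwd_changes' | \
      sudo tee /etc/audit/rules.d/90-passwd.rules >/dev/null

    # Recompile rules.d into the runtime ruleset and load it
    sudo augenrules --load
    sudo auditctl -l
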
May 8 00:20:16.833491 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:20:16.833833 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:20:17.103666 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:20:17.103808 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:20:17.365185 dockerd[1724]: time="2025-05-08T00:20:17.364760516Z" level=info msg="Starting up" May 8 00:20:17.454885 dockerd[1724]: time="2025-05-08T00:20:17.454837416Z" level=info msg="Loading containers: start." May 8 00:20:17.615490 kernel: Initializing XFRM netlink socket May 8 00:20:17.703511 systemd-networkd[1396]: docker0: Link UP May 8 00:20:17.737095 dockerd[1724]: time="2025-05-08T00:20:17.737052111Z" level=info msg="Loading containers: done." May 8 00:20:17.750742 dockerd[1724]: time="2025-05-08T00:20:17.750234747Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:20:17.750742 dockerd[1724]: time="2025-05-08T00:20:17.750309347Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 8 00:20:17.750742 dockerd[1724]: time="2025-05-08T00:20:17.750414477Z" level=info msg="Daemon has completed initialization" May 8 00:20:17.781034 dockerd[1724]: time="2025-05-08T00:20:17.780988379Z" level=info msg="API listen on /run/docker.sock" May 8 00:20:17.781558 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:20:18.573828 containerd[1489]: time="2025-05-08T00:20:18.573763234Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:20:19.358351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824864507.mount: Deactivated successfully. May 8 00:20:20.790137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:20:20.801671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:20.965634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:20.974725 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:20:21.023391 kubelet[1978]: E0508 00:20:21.022979 1978 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:20:21.029565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:20:21.029760 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:20:21.030202 systemd[1]: kubelet.service: Consumed 187ms CPU time, 93.7M memory peak. 
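
dockerd comes up on the overlay2 storage driver and warns that native diff is disabled because this kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that costs some image-build performance but is otherwise harmless. The daemon's view can be confirmed with:

    # Storage driver and server version as the daemon reports them
    docker info --format '{{.Driver}}'
    docker version --format '{{.Server.Version}}'
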
May 8 00:20:21.471040 containerd[1489]: time="2025-05-08T00:20:21.470981567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:21.471972 containerd[1489]: time="2025-05-08T00:20:21.471923959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 8 00:20:21.472476 containerd[1489]: time="2025-05-08T00:20:21.472397640Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:21.476811 containerd[1489]: time="2025-05-08T00:20:21.475719766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:21.476811 containerd[1489]: time="2025-05-08T00:20:21.476621468Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.902814694s" May 8 00:20:21.476811 containerd[1489]: time="2025-05-08T00:20:21.476651128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 00:20:21.499892 containerd[1489]: time="2025-05-08T00:20:21.499858295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:20:23.703381 containerd[1489]: time="2025-05-08T00:20:23.703332971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:23.704293 containerd[1489]: time="2025-05-08T00:20:23.704254862Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 8 00:20:23.706317 containerd[1489]: time="2025-05-08T00:20:23.704828054Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:23.708584 containerd[1489]: time="2025-05-08T00:20:23.707312518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:23.708584 containerd[1489]: time="2025-05-08T00:20:23.708462451Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.208371466s" May 8 00:20:23.708584 containerd[1489]: time="2025-05-08T00:20:23.708496121Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 00:20:23.731851 
containerd[1489]: time="2025-05-08T00:20:23.731829238Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:20:25.371872 containerd[1489]: time="2025-05-08T00:20:25.371809807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:25.372752 containerd[1489]: time="2025-05-08T00:20:25.372719359Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 8 00:20:25.373316 containerd[1489]: time="2025-05-08T00:20:25.373270550Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:25.375537 containerd[1489]: time="2025-05-08T00:20:25.375498444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:25.376579 containerd[1489]: time="2025-05-08T00:20:25.376435096Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.644473138s" May 8 00:20:25.376579 containerd[1489]: time="2025-05-08T00:20:25.376479926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 00:20:25.400918 containerd[1489]: time="2025-05-08T00:20:25.400808075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:20:26.659170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884619276.mount: Deactivated successfully. 
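
These pulls go through containerd's CRI plugin, which is why each image appears as ImageCreate events for the tag, the image-config ID (sha256:...) and the repo digest, followed by a "Pulled image ... in Ns" summary. The same pull can be driven by hand with crictl, assuming the default containerd socket:

    # Point crictl at containerd's CRI endpoint and pull one of the images
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-apiserver:v1.30.12

    # List what the CRI image store now contains
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
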
May 8 00:20:27.021777 containerd[1489]: time="2025-05-08T00:20:27.020832654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:27.021777 containerd[1489]: time="2025-05-08T00:20:27.021418865Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 8 00:20:27.021777 containerd[1489]: time="2025-05-08T00:20:27.021625336Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:27.023687 containerd[1489]: time="2025-05-08T00:20:27.023665790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:27.024090 containerd[1489]: time="2025-05-08T00:20:27.024040250Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.623076275s" May 8 00:20:27.024143 containerd[1489]: time="2025-05-08T00:20:27.024088411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 00:20:27.053002 containerd[1489]: time="2025-05-08T00:20:27.052952908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:20:27.790976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800187881.mount: Deactivated successfully. 
May 8 00:20:28.393675 containerd[1489]: time="2025-05-08T00:20:28.393611829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:28.394910 containerd[1489]: time="2025-05-08T00:20:28.394871901Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:20:28.395618 containerd[1489]: time="2025-05-08T00:20:28.395376552Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:28.398759 containerd[1489]: time="2025-05-08T00:20:28.398095558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:28.400036 containerd[1489]: time="2025-05-08T00:20:28.399653451Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.346665563s" May 8 00:20:28.400036 containerd[1489]: time="2025-05-08T00:20:28.399695261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:20:28.420839 containerd[1489]: time="2025-05-08T00:20:28.420784463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:20:29.102700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744721724.mount: Deactivated successfully. 
May 8 00:20:29.106476 containerd[1489]: time="2025-05-08T00:20:29.105992533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:29.106706 containerd[1489]: time="2025-05-08T00:20:29.106664175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 8 00:20:29.107398 containerd[1489]: time="2025-05-08T00:20:29.107129056Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:29.109250 containerd[1489]: time="2025-05-08T00:20:29.109206460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:29.110292 containerd[1489]: time="2025-05-08T00:20:29.110079192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 689.139528ms" May 8 00:20:29.110292 containerd[1489]: time="2025-05-08T00:20:29.110109012Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:20:29.132538 containerd[1489]: time="2025-05-08T00:20:29.132501276Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:20:29.889816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975639451.mount: Deactivated successfully. May 8 00:20:31.090359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:20:31.095892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:31.236585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:31.241606 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:20:31.284708 kubelet[2127]: E0508 00:20:31.284676 2127 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:20:31.288369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:20:31.288578 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:20:31.288927 systemd[1]: kubelet.service: Consumed 178ms CPU time, 96.4M memory peak. 
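
kubelet.service keeps coming back ("Scheduled restart job, restart counter is at 2") because the unit carries a Restart= policy while /var/lib/kubelet/config.yaml is still missing; the crash loop continues until the node is actually bootstrapped. The counter and last result are visible as unit properties:

    # How many times systemd has restarted the unit, and why it last exited
    systemctl show kubelet -p NRestarts,Result,ExecMainStatus

    # The unit file (including drop-ins) that sets the Restart= policy
    systemctl cat kubelet
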
May 8 00:20:32.331984 containerd[1489]: time="2025-05-08T00:20:32.331873264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:32.335869 containerd[1489]: time="2025-05-08T00:20:32.333806428Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 8 00:20:32.339632 containerd[1489]: time="2025-05-08T00:20:32.339567589Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:32.342768 containerd[1489]: time="2025-05-08T00:20:32.342730235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:20:32.344537 containerd[1489]: time="2025-05-08T00:20:32.343503297Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.210972511s" May 8 00:20:32.344537 containerd[1489]: time="2025-05-08T00:20:32.343537967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:20:34.662367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:34.662530 systemd[1]: kubelet.service: Consumed 178ms CPU time, 96.4M memory peak. May 8 00:20:34.669630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:34.694054 systemd[1]: Reload requested from client PID 2208 ('systemctl') (unit session-7.scope)... May 8 00:20:34.694071 systemd[1]: Reloading... May 8 00:20:34.849524 zram_generator::config[2256]: No configuration found. May 8 00:20:34.955270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:20:35.044743 systemd[1]: Reloading finished in 350 ms. May 8 00:20:35.090772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:35.094769 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:20:35.100224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:35.101408 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:20:35.101695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:35.101745 systemd[1]: kubelet.service: Consumed 126ms CPU time, 85M memory peak. May 8 00:20:35.109791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:35.237905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:35.243544 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:20:35.282316 kubelet[2314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:20:35.282316 kubelet[2314]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:20:35.282316 kubelet[2314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:20:35.282613 kubelet[2314]: I0508 00:20:35.282368 2314 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:20:35.704543 kubelet[2314]: I0508 00:20:35.704500 2314 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:20:35.704543 kubelet[2314]: I0508 00:20:35.704525 2314 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:20:35.704730 kubelet[2314]: I0508 00:20:35.704717 2314 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:20:35.725257 kubelet[2314]: I0508 00:20:35.724545 2314 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:20:35.725257 kubelet[2314]: E0508 00:20:35.725251 2314 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.232.0.241:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.734949 kubelet[2314]: I0508 00:20:35.734893 2314 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:20:35.737309 kubelet[2314]: I0508 00:20:35.737255 2314 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:20:35.737473 kubelet[2314]: I0508 00:20:35.737288 2314 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-0-241","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:20:35.737996 kubelet[2314]: I0508 00:20:35.737965 2314 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:20:35.737996 kubelet[2314]: I0508 00:20:35.737985 2314 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:20:35.738117 kubelet[2314]: I0508 00:20:35.738091 2314 state_mem.go:36] "Initialized new in-memory state store" May 8 00:20:35.739252 kubelet[2314]: I0508 00:20:35.739004 2314 kubelet.go:400] "Attempting to sync node with API server" May 8 00:20:35.739252 kubelet[2314]: I0508 00:20:35.739024 2314 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:20:35.739252 kubelet[2314]: I0508 00:20:35.739043 2314 kubelet.go:312] "Adding apiserver pod source" May 8 00:20:35.739252 kubelet[2314]: I0508 00:20:35.739065 2314 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:20:35.740414 kubelet[2314]: W0508 00:20:35.739536 2314 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.740414 kubelet[2314]: E0508 00:20:35.739604 2314 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.744240 kubelet[2314]: W0508 00:20:35.744097 2314 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.Service: Get "https://172.232.0.241:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.744240 kubelet[2314]: E0508 00:20:35.744140 2314 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.232.0.241:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.744391 kubelet[2314]: I0508 00:20:35.744375 2314 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:20:35.746621 kubelet[2314]: I0508 00:20:35.745731 2314 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:20:35.746621 kubelet[2314]: W0508 00:20:35.745785 2314 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:20:35.746621 kubelet[2314]: I0508 00:20:35.746321 2314 server.go:1264] "Started kubelet" May 8 00:20:35.748678 kubelet[2314]: I0508 00:20:35.748314 2314 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:20:35.749517 kubelet[2314]: I0508 00:20:35.749206 2314 server.go:455] "Adding debug handlers to kubelet server" May 8 00:20:35.751854 kubelet[2314]: I0508 00:20:35.751816 2314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:20:35.752128 kubelet[2314]: I0508 00:20:35.752114 2314 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:20:35.752372 kubelet[2314]: E0508 00:20:35.752295 2314 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.0.241:6443/api/v1/namespaces/default/events\": dial tcp 172.232.0.241:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-0-241.183d65528ec03687 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-0-241,UID:172-232-0-241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-0-241,},FirstTimestamp:2025-05-08 00:20:35.746305671 +0000 UTC m=+0.498811838,LastTimestamp:2025-05-08 00:20:35.746305671 +0000 UTC m=+0.498811838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-0-241,}" May 8 00:20:35.753040 kubelet[2314]: I0508 00:20:35.752407 2314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:20:35.755389 kubelet[2314]: I0508 00:20:35.755153 2314 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:20:35.756255 kubelet[2314]: I0508 00:20:35.756228 2314 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:20:35.756301 kubelet[2314]: I0508 00:20:35.756283 2314 reconciler.go:26] "Reconciler: start to sync state" May 8 00:20:35.756639 kubelet[2314]: W0508 00:20:35.756602 2314 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.756639 kubelet[2314]: E0508 00:20:35.756640 2314 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.756845 kubelet[2314]: E0508 00:20:35.756810 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.0.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-0-241?timeout=10s\": dial tcp 172.232.0.241:6443: connect: connection refused" interval="200ms" May 8 00:20:35.757482 kubelet[2314]: I0508 00:20:35.757435 2314 factory.go:221] Registration of the systemd container factory successfully May 8 00:20:35.757565 kubelet[2314]: I0508 00:20:35.757537 2314 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:20:35.762044 kubelet[2314]: E0508 00:20:35.762014 2314 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:20:35.762138 kubelet[2314]: I0508 00:20:35.762115 2314 factory.go:221] Registration of the containerd container factory successfully May 8 00:20:35.773434 kubelet[2314]: I0508 00:20:35.773412 2314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:20:35.774841 kubelet[2314]: I0508 00:20:35.774605 2314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:20:35.774841 kubelet[2314]: I0508 00:20:35.774627 2314 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:20:35.774841 kubelet[2314]: I0508 00:20:35.774642 2314 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:20:35.774841 kubelet[2314]: E0508 00:20:35.774679 2314 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:20:35.784010 kubelet[2314]: W0508 00:20:35.783955 2314 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.0.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.784010 kubelet[2314]: E0508 00:20:35.784001 2314 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.232.0.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:35.795553 kubelet[2314]: I0508 00:20:35.795417 2314 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:20:35.795553 kubelet[2314]: I0508 00:20:35.795464 2314 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:20:35.795553 kubelet[2314]: I0508 00:20:35.795489 2314 state_mem.go:36] "Initialized new in-memory state store" May 8 00:20:35.797963 kubelet[2314]: I0508 00:20:35.797945 2314 policy_none.go:49] "None policy: Start" May 8 00:20:35.798542 kubelet[2314]: I0508 00:20:35.798524 2314 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:20:35.798873 kubelet[2314]: I0508 00:20:35.798619 2314 state_mem.go:35] "Initializing new in-memory state store" May 8 00:20:35.805835 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
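[Annotation] The kubepods.slice unit created just above is the root of the kubelet's QoS cgroup tree; with CgroupDriver "systemd" (see the nodeConfig dump earlier) the burstable and besteffort sub-slices and per-pod slices follow in the next entries. As a rough, illustrative sketch (not the kubelet's actual code) of the slice-naming convention visible in this log, note that systemd reserves "-" for hierarchy levels, so dashes in a pod UID are escaped to underscores:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice sketches the naming pattern seen in this log:
    // kubepods-<qos>-pod<uid>.slice for burstable/besteffort pods and
    // kubepods-pod<uid>.slice for guaranteed ones. Dashes in the UID
    // become underscores because "-" separates slice levels in systemd.
    func podSlice(qos, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        if qos == "guaranteed" {
            return "kubepods-pod" + escaped + ".slice"
        }
        return "kubepods-" + qos + "-pod" + escaped + ".slice"
    }

    func main() {
        // Matches kubepods-besteffort-pod1325f5b3_af74_456b_8d2d_b62db3221864.slice
        // created later in this log for the kube-proxy pod.
        fmt.Println(podSlice("besteffort", "1325f5b3-af74-456b-8d2d-b62db3221864"))
    }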
May 8 00:20:35.821499 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:20:35.825623 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:20:35.836379 kubelet[2314]: I0508 00:20:35.836341 2314 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:20:35.837143 kubelet[2314]: I0508 00:20:35.836781 2314 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:20:35.837143 kubelet[2314]: I0508 00:20:35.836908 2314 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:20:35.839415 kubelet[2314]: E0508 00:20:35.839389 2314 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-0-241\" not found" May 8 00:20:35.857269 kubelet[2314]: I0508 00:20:35.857250 2314 kubelet_node_status.go:73] "Attempting to register node" node="172-232-0-241" May 8 00:20:35.857741 kubelet[2314]: E0508 00:20:35.857681 2314 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.232.0.241:6443/api/v1/nodes\": dial tcp 172.232.0.241:6443: connect: connection refused" node="172-232-0-241" May 8 00:20:35.875084 kubelet[2314]: I0508 00:20:35.875048 2314 topology_manager.go:215] "Topology Admit Handler" podUID="90a3333af83af06a3f32153410a2739d" podNamespace="kube-system" podName="kube-apiserver-172-232-0-241" May 8 00:20:35.876486 kubelet[2314]: I0508 00:20:35.876459 2314 topology_manager.go:215] "Topology Admit Handler" podUID="e9c053f40250b5d771cec3a345cd010b" podNamespace="kube-system" podName="kube-controller-manager-172-232-0-241" May 8 00:20:35.877927 kubelet[2314]: I0508 00:20:35.877901 2314 topology_manager.go:215] "Topology Admit Handler" podUID="9ce05f879fe6cea4f177740df48ae559" podNamespace="kube-system" podName="kube-scheduler-172-232-0-241" May 8 00:20:35.884514 systemd[1]: Created slice kubepods-burstable-pod90a3333af83af06a3f32153410a2739d.slice - libcontainer container kubepods-burstable-pod90a3333af83af06a3f32153410a2739d.slice. May 8 00:20:35.898982 systemd[1]: Created slice kubepods-burstable-pode9c053f40250b5d771cec3a345cd010b.slice - libcontainer container kubepods-burstable-pode9c053f40250b5d771cec3a345cd010b.slice. May 8 00:20:35.910932 systemd[1]: Created slice kubepods-burstable-pod9ce05f879fe6cea4f177740df48ae559.slice - libcontainer container kubepods-burstable-pod9ce05f879fe6cea4f177740df48ae559.slice. 
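[Annotation] The three control-plane pods admitted just above (kube-apiserver, kube-controller-manager, kube-scheduler) are static pods read from /etc/kubernetes/manifests; their UIDs (e.g. 90a3333af83af06a3f32153410a2739d) are 32 hex characters rather than dashed UUIDs because the kubelet derives them from a hash instead of asking the API server. A simplified sketch of that idea, hashing manifest bytes plus node name (illustrative only; the real kubelet hashes the decoded pod object):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
    )

    // staticPodUID illustrates why static-pod UIDs look like MD5 digests:
    // hashing the manifest together with the node name yields a stable,
    // per-node UID with no API-server round trip.
    func staticPodUID(manifest []byte, nodeName string) string {
        h := md5.New()
        h.Write(manifest)
        h.Write([]byte(nodeName))
        return hex.EncodeToString(h.Sum(nil))
    }

    func main() {
        fmt.Println(staticPodUID([]byte("apiVersion: v1\nkind: Pod\n"), "172-232-0-241"))
    }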
May 8 00:20:35.957302 kubelet[2314]: E0508 00:20:35.957206 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.0.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-0-241?timeout=10s\": dial tcp 172.232.0.241:6443: connect: connection refused" interval="400ms" May 8 00:20:36.057521 kubelet[2314]: I0508 00:20:36.057483 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90a3333af83af06a3f32153410a2739d-ca-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"90a3333af83af06a3f32153410a2739d\") " pod="kube-system/kube-apiserver-172-232-0-241" May 8 00:20:36.057521 kubelet[2314]: I0508 00:20:36.057515 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-flexvolume-dir\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:36.057609 kubelet[2314]: I0508 00:20:36.057542 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-k8s-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:36.057609 kubelet[2314]: I0508 00:20:36.057561 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-kubeconfig\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:36.057609 kubelet[2314]: I0508 00:20:36.057580 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:36.057609 kubelet[2314]: I0508 00:20:36.057596 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ce05f879fe6cea4f177740df48ae559-kubeconfig\") pod \"kube-scheduler-172-232-0-241\" (UID: \"9ce05f879fe6cea4f177740df48ae559\") " pod="kube-system/kube-scheduler-172-232-0-241" May 8 00:20:36.057702 kubelet[2314]: I0508 00:20:36.057612 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90a3333af83af06a3f32153410a2739d-k8s-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"90a3333af83af06a3f32153410a2739d\") " pod="kube-system/kube-apiserver-172-232-0-241" May 8 00:20:36.057702 kubelet[2314]: I0508 00:20:36.057629 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90a3333af83af06a3f32153410a2739d-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-0-241\" (UID: \"90a3333af83af06a3f32153410a2739d\") " 
pod="kube-system/kube-apiserver-172-232-0-241" May 8 00:20:36.057702 kubelet[2314]: I0508 00:20:36.057645 2314 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-ca-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:36.059139 kubelet[2314]: I0508 00:20:36.059047 2314 kubelet_node_status.go:73] "Attempting to register node" node="172-232-0-241" May 8 00:20:36.059525 kubelet[2314]: E0508 00:20:36.059496 2314 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.232.0.241:6443/api/v1/nodes\": dial tcp 172.232.0.241:6443: connect: connection refused" node="172-232-0-241" May 8 00:20:36.197364 kubelet[2314]: E0508 00:20:36.197320 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:36.197993 containerd[1489]: time="2025-05-08T00:20:36.197943184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-0-241,Uid:90a3333af83af06a3f32153410a2739d,Namespace:kube-system,Attempt:0,}" May 8 00:20:36.208635 kubelet[2314]: E0508 00:20:36.208547 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:36.208948 containerd[1489]: time="2025-05-08T00:20:36.208859386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-0-241,Uid:e9c053f40250b5d771cec3a345cd010b,Namespace:kube-system,Attempt:0,}" May 8 00:20:36.213419 kubelet[2314]: E0508 00:20:36.213391 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:36.213762 containerd[1489]: time="2025-05-08T00:20:36.213692256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-0-241,Uid:9ce05f879fe6cea4f177740df48ae559,Namespace:kube-system,Attempt:0,}" May 8 00:20:36.358160 kubelet[2314]: E0508 00:20:36.358109 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.0.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-0-241?timeout=10s\": dial tcp 172.232.0.241:6443: connect: connection refused" interval="800ms" May 8 00:20:36.462168 kubelet[2314]: I0508 00:20:36.462040 2314 kubelet_node_status.go:73] "Attempting to register node" node="172-232-0-241" May 8 00:20:36.462540 kubelet[2314]: E0508 00:20:36.462371 2314 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.232.0.241:6443/api/v1/nodes\": dial tcp 172.232.0.241:6443: connect: connection refused" node="172-232-0-241" May 8 00:20:36.649100 kubelet[2314]: W0508 00:20:36.649008 2314 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:36.649100 kubelet[2314]: E0508 00:20:36.649099 2314 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://172.232.0.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:36.712006 kubelet[2314]: W0508 00:20:36.711951 2314 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.0.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:36.712006 kubelet[2314]: E0508 00:20:36.712002 2314 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.232.0.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:36.836285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358919754.mount: Deactivated successfully. May 8 00:20:36.841085 containerd[1489]: time="2025-05-08T00:20:36.841022980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:20:36.842123 containerd[1489]: time="2025-05-08T00:20:36.842092892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:20:36.843065 containerd[1489]: time="2025-05-08T00:20:36.843022804Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:20:36.843439 containerd[1489]: time="2025-05-08T00:20:36.843317365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:20:36.844362 containerd[1489]: time="2025-05-08T00:20:36.844323187Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:20:36.846198 containerd[1489]: time="2025-05-08T00:20:36.846159880Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:20:36.847476 containerd[1489]: time="2025-05-08T00:20:36.846775811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:20:36.848939 containerd[1489]: time="2025-05-08T00:20:36.848896696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:20:36.850581 containerd[1489]: time="2025-05-08T00:20:36.850536459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.489025ms" May 8 00:20:36.852129 containerd[1489]: time="2025-05-08T00:20:36.852095532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 638.346626ms" May 8 00:20:36.852710 containerd[1489]: time="2025-05-08T00:20:36.852683403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 643.736937ms" May 8 00:20:36.963523 containerd[1489]: time="2025-05-08T00:20:36.963341105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:36.963523 containerd[1489]: time="2025-05-08T00:20:36.963404015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:36.963523 containerd[1489]: time="2025-05-08T00:20:36.963417665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:36.964877 containerd[1489]: time="2025-05-08T00:20:36.964504217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:36.970482 containerd[1489]: time="2025-05-08T00:20:36.970291068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:36.970482 containerd[1489]: time="2025-05-08T00:20:36.970336739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:36.970482 containerd[1489]: time="2025-05-08T00:20:36.970352589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:36.970862 containerd[1489]: time="2025-05-08T00:20:36.970709839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:36.971688 containerd[1489]: time="2025-05-08T00:20:36.969191926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:20:36.972632 containerd[1489]: time="2025-05-08T00:20:36.972553113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:20:36.972814 containerd[1489]: time="2025-05-08T00:20:36.972611003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:36.972883 containerd[1489]: time="2025-05-08T00:20:36.972783893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:20:36.984477 kubelet[2314]: W0508 00:20:36.984377 2314 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:36.984477 kubelet[2314]: E0508 00:20:36.984433 2314 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.232.0.241:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-0-241&limit=500&resourceVersion=0": dial tcp 172.232.0.241:6443: connect: connection refused May 8 00:20:36.997644 systemd[1]: Started cri-containerd-bd27dc4ee2b420e83dfe961a6721cf5fd0dd1ed494e0228249b05102fbdc2241.scope - libcontainer container bd27dc4ee2b420e83dfe961a6721cf5fd0dd1ed494e0228249b05102fbdc2241. May 8 00:20:37.003588 systemd[1]: Started cri-containerd-7b781f1702f140a38b36f1e9a3c439483a8f9cce66e385c0fb5547615cff7d95.scope - libcontainer container 7b781f1702f140a38b36f1e9a3c439483a8f9cce66e385c0fb5547615cff7d95. May 8 00:20:37.005929 systemd[1]: Started cri-containerd-ba3c86ccb9d555a0ffa2dea6be357e8dc7fd0767d37a7e9e8b428a4d28b927c0.scope - libcontainer container ba3c86ccb9d555a0ffa2dea6be357e8dc7fd0767d37a7e9e8b428a4d28b927c0. May 8 00:20:37.069818 containerd[1489]: time="2025-05-08T00:20:37.069774567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-0-241,Uid:9ce05f879fe6cea4f177740df48ae559,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd27dc4ee2b420e83dfe961a6721cf5fd0dd1ed494e0228249b05102fbdc2241\"" May 8 00:20:37.071538 kubelet[2314]: E0508 00:20:37.071436 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:37.077333 containerd[1489]: time="2025-05-08T00:20:37.076399181Z" level=info msg="CreateContainer within sandbox \"bd27dc4ee2b420e83dfe961a6721cf5fd0dd1ed494e0228249b05102fbdc2241\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:20:37.082661 containerd[1489]: time="2025-05-08T00:20:37.082638003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-0-241,Uid:90a3333af83af06a3f32153410a2739d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b781f1702f140a38b36f1e9a3c439483a8f9cce66e385c0fb5547615cff7d95\"" May 8 00:20:37.083081 containerd[1489]: time="2025-05-08T00:20:37.082893604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-0-241,Uid:e9c053f40250b5d771cec3a345cd010b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba3c86ccb9d555a0ffa2dea6be357e8dc7fd0767d37a7e9e8b428a4d28b927c0\"" May 8 00:20:37.083770 kubelet[2314]: E0508 00:20:37.083742 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:37.084421 kubelet[2314]: E0508 00:20:37.083907 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:37.088666 containerd[1489]: time="2025-05-08T00:20:37.088086234Z" level=info msg="CreateContainer within sandbox 
\"7b781f1702f140a38b36f1e9a3c439483a8f9cce66e385c0fb5547615cff7d95\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:20:37.088666 containerd[1489]: time="2025-05-08T00:20:37.088211184Z" level=info msg="CreateContainer within sandbox \"ba3c86ccb9d555a0ffa2dea6be357e8dc7fd0767d37a7e9e8b428a4d28b927c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:20:37.104858 containerd[1489]: time="2025-05-08T00:20:37.104589387Z" level=info msg="CreateContainer within sandbox \"bd27dc4ee2b420e83dfe961a6721cf5fd0dd1ed494e0228249b05102fbdc2241\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e56f3cf835cd2c14e54ec663f2a72318337cc9760de56fd7ee67647924ca5ec6\"" May 8 00:20:37.105245 containerd[1489]: time="2025-05-08T00:20:37.105211888Z" level=info msg="StartContainer for \"e56f3cf835cd2c14e54ec663f2a72318337cc9760de56fd7ee67647924ca5ec6\"" May 8 00:20:37.108856 containerd[1489]: time="2025-05-08T00:20:37.108791515Z" level=info msg="CreateContainer within sandbox \"ba3c86ccb9d555a0ffa2dea6be357e8dc7fd0767d37a7e9e8b428a4d28b927c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61e2cf0e090c61d63ed8968acf9899e76f48f760607c160af8734867c65b340e\"" May 8 00:20:37.110856 containerd[1489]: time="2025-05-08T00:20:37.109735507Z" level=info msg="StartContainer for \"61e2cf0e090c61d63ed8968acf9899e76f48f760607c160af8734867c65b340e\"" May 8 00:20:37.113833 containerd[1489]: time="2025-05-08T00:20:37.113812175Z" level=info msg="CreateContainer within sandbox \"7b781f1702f140a38b36f1e9a3c439483a8f9cce66e385c0fb5547615cff7d95\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17e1a82a95ce4e3392be8e23e9e176725181ba37838acf0303045e3507217f99\"" May 8 00:20:37.114228 containerd[1489]: time="2025-05-08T00:20:37.114192056Z" level=info msg="StartContainer for \"17e1a82a95ce4e3392be8e23e9e176725181ba37838acf0303045e3507217f99\"" May 8 00:20:37.147847 systemd[1]: Started cri-containerd-e56f3cf835cd2c14e54ec663f2a72318337cc9760de56fd7ee67647924ca5ec6.scope - libcontainer container e56f3cf835cd2c14e54ec663f2a72318337cc9760de56fd7ee67647924ca5ec6. May 8 00:20:37.151603 systemd[1]: Started cri-containerd-17e1a82a95ce4e3392be8e23e9e176725181ba37838acf0303045e3507217f99.scope - libcontainer container 17e1a82a95ce4e3392be8e23e9e176725181ba37838acf0303045e3507217f99. May 8 00:20:37.160399 kubelet[2314]: E0508 00:20:37.160327 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.0.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-0-241?timeout=10s\": dial tcp 172.232.0.241:6443: connect: connection refused" interval="1.6s" May 8 00:20:37.167593 systemd[1]: Started cri-containerd-61e2cf0e090c61d63ed8968acf9899e76f48f760607c160af8734867c65b340e.scope - libcontainer container 61e2cf0e090c61d63ed8968acf9899e76f48f760607c160af8734867c65b340e. 
May 8 00:20:37.223754 containerd[1489]: time="2025-05-08T00:20:37.223698865Z" level=info msg="StartContainer for \"17e1a82a95ce4e3392be8e23e9e176725181ba37838acf0303045e3507217f99\" returns successfully" May 8 00:20:37.250634 containerd[1489]: time="2025-05-08T00:20:37.250588269Z" level=info msg="StartContainer for \"e56f3cf835cd2c14e54ec663f2a72318337cc9760de56fd7ee67647924ca5ec6\" returns successfully" May 8 00:20:37.254346 containerd[1489]: time="2025-05-08T00:20:37.254311326Z" level=info msg="StartContainer for \"61e2cf0e090c61d63ed8968acf9899e76f48f760607c160af8734867c65b340e\" returns successfully" May 8 00:20:37.265562 kubelet[2314]: I0508 00:20:37.265536 2314 kubelet_node_status.go:73] "Attempting to register node" node="172-232-0-241" May 8 00:20:37.265925 kubelet[2314]: E0508 00:20:37.265901 2314 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.232.0.241:6443/api/v1/nodes\": dial tcp 172.232.0.241:6443: connect: connection refused" node="172-232-0-241" May 8 00:20:37.805999 kubelet[2314]: E0508 00:20:37.805957 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:37.806785 kubelet[2314]: E0508 00:20:37.806761 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:37.810257 kubelet[2314]: E0508 00:20:37.810234 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:38.764398 kubelet[2314]: E0508 00:20:38.764307 2314 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-0-241\" not found" node="172-232-0-241" May 8 00:20:38.765291 kubelet[2314]: E0508 00:20:38.765269 2314 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172-232-0-241" not found May 8 00:20:38.812830 kubelet[2314]: E0508 00:20:38.812803 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:38.868318 kubelet[2314]: I0508 00:20:38.868292 2314 kubelet_node_status.go:73] "Attempting to register node" node="172-232-0-241" May 8 00:20:38.872961 kubelet[2314]: I0508 00:20:38.872920 2314 kubelet_node_status.go:76] "Successfully registered node" node="172-232-0-241" May 8 00:20:38.879131 kubelet[2314]: E0508 00:20:38.879092 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:38.980234 kubelet[2314]: E0508 00:20:38.980195 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:38.996308 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
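[Annotation] The recurring dns.go:153 "Nameserver limits exceeded" errors throughout this log are not fatal: glibc resolvers honor at most three nameservers, so the kubelet keeps the first three entries from the host's resolv.conf (172.232.0.16, 172.232.0.21, 172.232.0.13 here) and logs the rest as omitted. A minimal sketch of that clamp, assuming the limit of three:

    package main

    import "fmt"

    const maxNameservers = 3 // glibc's resolv.conf limit, which the kubelet enforces

    // clampNameservers keeps the first three nameservers and reports whether
    // any were dropped, mirroring the warning logged above.
    func clampNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        kept, dropped := clampNameservers([]string{
            "172.232.0.16", "172.232.0.21", "172.232.0.13",
            "8.8.8.8", // hypothetical fourth entry, added to trigger the clamp
        })
        fmt.Println(kept, dropped) // [172.232.0.16 172.232.0.21 172.232.0.13] true
    }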
May 8 00:20:39.080741 kubelet[2314]: E0508 00:20:39.080640 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:39.181703 kubelet[2314]: E0508 00:20:39.181666 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:39.282517 kubelet[2314]: E0508 00:20:39.282474 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:39.344366 kubelet[2314]: E0508 00:20:39.343972 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:39.382900 kubelet[2314]: E0508 00:20:39.382671 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:39.483331 kubelet[2314]: E0508 00:20:39.483301 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:39.584335 kubelet[2314]: E0508 00:20:39.584271 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:39.685728 kubelet[2314]: E0508 00:20:39.685458 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:39.786247 kubelet[2314]: E0508 00:20:39.786158 2314 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172-232-0-241\" not found" May 8 00:20:40.178165 systemd[1]: Reload requested from client PID 2589 ('systemctl') (unit session-7.scope)... May 8 00:20:40.178187 systemd[1]: Reloading... May 8 00:20:40.299520 zram_generator::config[2636]: No configuration found. May 8 00:20:40.412506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:20:40.517576 systemd[1]: Reloading finished in 338 ms. May 8 00:20:40.549576 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:40.569281 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:20:40.569928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:40.569985 systemd[1]: kubelet.service: Consumed 868ms CPU time, 116.4M memory peak. May 8 00:20:40.580475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:20:40.745041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:20:40.751139 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:20:40.804478 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:20:40.804478 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:20:40.804478 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:20:40.804478 kubelet[2684]: I0508 00:20:40.804296 2684 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:20:40.811471 kubelet[2684]: I0508 00:20:40.811416 2684 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:20:40.811471 kubelet[2684]: I0508 00:20:40.811437 2684 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:20:40.812685 kubelet[2684]: I0508 00:20:40.811823 2684 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:20:40.813747 kubelet[2684]: I0508 00:20:40.813714 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:20:40.815601 kubelet[2684]: I0508 00:20:40.815177 2684 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:20:40.823121 kubelet[2684]: I0508 00:20:40.823094 2684 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:20:40.823563 kubelet[2684]: I0508 00:20:40.823521 2684 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:20:40.823881 kubelet[2684]: I0508 00:20:40.823563 2684 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-0-241","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:20:40.824064 kubelet[2684]: I0508 00:20:40.823891 2684 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:20:40.824064 kubelet[2684]: I0508 00:20:40.823903 2684 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:20:40.824064 kubelet[2684]: I0508 00:20:40.823952 2684 state_mem.go:36] "Initialized new in-memory state store" May 8 00:20:40.824064 kubelet[2684]: I0508 00:20:40.824051 2684 kubelet.go:400] "Attempting to sync node with API server" May 8 
00:20:40.824064 kubelet[2684]: I0508 00:20:40.824063 2684 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:20:40.824288 kubelet[2684]: I0508 00:20:40.824088 2684 kubelet.go:312] "Adding apiserver pod source" May 8 00:20:40.824288 kubelet[2684]: I0508 00:20:40.824104 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:20:40.827363 kubelet[2684]: I0508 00:20:40.827335 2684 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 00:20:40.827654 kubelet[2684]: I0508 00:20:40.827543 2684 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:20:40.828010 kubelet[2684]: I0508 00:20:40.827973 2684 server.go:1264] "Started kubelet" May 8 00:20:40.833747 kubelet[2684]: I0508 00:20:40.833385 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:20:40.838150 kubelet[2684]: I0508 00:20:40.837729 2684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:20:40.838714 kubelet[2684]: I0508 00:20:40.838699 2684 server.go:455] "Adding debug handlers to kubelet server" May 8 00:20:40.839730 kubelet[2684]: I0508 00:20:40.839692 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:20:40.839975 kubelet[2684]: I0508 00:20:40.839961 2684 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:20:40.844028 kubelet[2684]: I0508 00:20:40.844014 2684 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:20:40.844205 kubelet[2684]: I0508 00:20:40.844193 2684 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:20:40.844487 kubelet[2684]: I0508 00:20:40.844440 2684 reconciler.go:26] "Reconciler: start to sync state" May 8 00:20:40.845510 kubelet[2684]: I0508 00:20:40.845492 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:20:40.846483 kubelet[2684]: I0508 00:20:40.846430 2684 factory.go:221] Registration of the containerd container factory successfully May 8 00:20:40.846578 kubelet[2684]: I0508 00:20:40.846568 2684 factory.go:221] Registration of the systemd container factory successfully May 8 00:20:40.851479 kubelet[2684]: E0508 00:20:40.851400 2684 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:20:40.851966 kubelet[2684]: I0508 00:20:40.851928 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:20:40.855069 kubelet[2684]: I0508 00:20:40.855037 2684 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:20:40.855121 kubelet[2684]: I0508 00:20:40.855078 2684 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:20:40.855121 kubelet[2684]: I0508 00:20:40.855096 2684 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:20:40.855175 kubelet[2684]: E0508 00:20:40.855143 2684 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:20:40.901629 kubelet[2684]: I0508 00:20:40.901595 2684 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:20:40.901629 kubelet[2684]: I0508 00:20:40.901625 2684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:20:40.901740 kubelet[2684]: I0508 00:20:40.901648 2684 state_mem.go:36] "Initialized new in-memory state store" May 8 00:20:40.901827 kubelet[2684]: I0508 00:20:40.901804 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:20:40.901868 kubelet[2684]: I0508 00:20:40.901824 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:20:40.901868 kubelet[2684]: I0508 00:20:40.901848 2684 policy_none.go:49] "None policy: Start" May 8 00:20:40.903564 kubelet[2684]: I0508 00:20:40.902706 2684 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:20:40.903564 kubelet[2684]: I0508 00:20:40.902728 2684 state_mem.go:35] "Initializing new in-memory state store" May 8 00:20:40.903564 kubelet[2684]: I0508 00:20:40.902885 2684 state_mem.go:75] "Updated machine memory state" May 8 00:20:40.908016 kubelet[2684]: I0508 00:20:40.907997 2684 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:20:40.908648 kubelet[2684]: I0508 00:20:40.908155 2684 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:20:40.908648 kubelet[2684]: I0508 00:20:40.908260 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:20:40.945524 kubelet[2684]: I0508 00:20:40.945492 2684 kubelet_node_status.go:73] "Attempting to register node" node="172-232-0-241" May 8 00:20:40.952299 kubelet[2684]: I0508 00:20:40.952259 2684 kubelet_node_status.go:112] "Node was previously registered" node="172-232-0-241" May 8 00:20:40.952352 kubelet[2684]: I0508 00:20:40.952318 2684 kubelet_node_status.go:76] "Successfully registered node" node="172-232-0-241" May 8 00:20:40.955747 kubelet[2684]: I0508 00:20:40.955490 2684 topology_manager.go:215] "Topology Admit Handler" podUID="90a3333af83af06a3f32153410a2739d" podNamespace="kube-system" podName="kube-apiserver-172-232-0-241" May 8 00:20:40.956057 kubelet[2684]: I0508 00:20:40.955938 2684 topology_manager.go:215] "Topology Admit Handler" podUID="e9c053f40250b5d771cec3a345cd010b" podNamespace="kube-system" podName="kube-controller-manager-172-232-0-241" May 8 00:20:40.956057 kubelet[2684]: I0508 00:20:40.956020 2684 topology_manager.go:215] "Topology Admit Handler" podUID="9ce05f879fe6cea4f177740df48ae559" podNamespace="kube-system" podName="kube-scheduler-172-232-0-241" May 8 00:20:41.044859 kubelet[2684]: I0508 00:20:41.044831 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90a3333af83af06a3f32153410a2739d-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-0-241\" (UID: \"90a3333af83af06a3f32153410a2739d\") " 
pod="kube-system/kube-apiserver-172-232-0-241" May 8 00:20:41.045030 kubelet[2684]: I0508 00:20:41.044862 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-ca-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:41.045030 kubelet[2684]: I0508 00:20:41.044887 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-kubeconfig\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:41.045030 kubelet[2684]: I0508 00:20:41.044902 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:41.045030 kubelet[2684]: I0508 00:20:41.044918 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90a3333af83af06a3f32153410a2739d-ca-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"90a3333af83af06a3f32153410a2739d\") " pod="kube-system/kube-apiserver-172-232-0-241" May 8 00:20:41.045030 kubelet[2684]: I0508 00:20:41.044932 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90a3333af83af06a3f32153410a2739d-k8s-certs\") pod \"kube-apiserver-172-232-0-241\" (UID: \"90a3333af83af06a3f32153410a2739d\") " pod="kube-system/kube-apiserver-172-232-0-241" May 8 00:20:41.045153 kubelet[2684]: I0508 00:20:41.044960 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ce05f879fe6cea4f177740df48ae559-kubeconfig\") pod \"kube-scheduler-172-232-0-241\" (UID: \"9ce05f879fe6cea4f177740df48ae559\") " pod="kube-system/kube-scheduler-172-232-0-241" May 8 00:20:41.045153 kubelet[2684]: I0508 00:20:41.044977 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-flexvolume-dir\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:41.045153 kubelet[2684]: I0508 00:20:41.045005 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9c053f40250b5d771cec3a345cd010b-k8s-certs\") pod \"kube-controller-manager-172-232-0-241\" (UID: \"e9c053f40250b5d771cec3a345cd010b\") " pod="kube-system/kube-controller-manager-172-232-0-241" May 8 00:20:41.181438 sudo[2716]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:20:41.181802 sudo[2716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:20:41.266824 
kubelet[2684]: E0508 00:20:41.266784 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:41.267091 kubelet[2684]: E0508 00:20:41.267068 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:41.268305 kubelet[2684]: E0508 00:20:41.267645 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:41.669578 sudo[2716]: pam_unix(sudo:session): session closed for user root May 8 00:20:41.827839 kubelet[2684]: I0508 00:20:41.826962 2684 apiserver.go:52] "Watching apiserver" May 8 00:20:41.844473 kubelet[2684]: I0508 00:20:41.844407 2684 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:20:41.883021 kubelet[2684]: E0508 00:20:41.882638 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:41.883260 kubelet[2684]: E0508 00:20:41.883245 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:41.886270 kubelet[2684]: E0508 00:20:41.886218 2684 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-232-0-241\" already exists" pod="kube-system/kube-apiserver-172-232-0-241" May 8 00:20:41.887106 kubelet[2684]: E0508 00:20:41.886986 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:41.913599 kubelet[2684]: I0508 00:20:41.913563 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-0-241" podStartSLOduration=1.913433723 podStartE2EDuration="1.913433723s" podCreationTimestamp="2025-05-08 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:20:41.912835351 +0000 UTC m=+1.157615875" watchObservedRunningTime="2025-05-08 00:20:41.913433723 +0000 UTC m=+1.158214237" May 8 00:20:41.914272 kubelet[2684]: I0508 00:20:41.914178 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-0-241" podStartSLOduration=1.914170952 podStartE2EDuration="1.914170952s" podCreationTimestamp="2025-05-08 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:20:41.90751497 +0000 UTC m=+1.152295484" watchObservedRunningTime="2025-05-08 00:20:41.914170952 +0000 UTC m=+1.158951466" May 8 00:20:41.920521 kubelet[2684]: I0508 00:20:41.920331 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-0-241" podStartSLOduration=1.9203220509999999 podStartE2EDuration="1.920322051s" podCreationTimestamp="2025-05-08 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:20:41.920186962 +0000 UTC m=+1.164967476" watchObservedRunningTime="2025-05-08 00:20:41.920322051 +0000 UTC m=+1.165102565" May 8 00:20:42.849655 sudo[1708]: pam_unix(sudo:session): session closed for user root May 8 00:20:42.881407 kubelet[2684]: E0508 00:20:42.881367 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:42.900351 sshd[1707]: Connection closed by 139.178.89.65 port 57630 May 8 00:20:42.900894 sshd-session[1705]: pam_unix(sshd:session): session closed for user core May 8 00:20:42.905737 systemd[1]: sshd@6-172.232.0.241:22-139.178.89.65:57630.service: Deactivated successfully. May 8 00:20:42.908950 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:20:42.909215 systemd[1]: session-7.scope: Consumed 4.345s CPU time, 291.3M memory peak. May 8 00:20:42.910433 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. May 8 00:20:42.911415 systemd-logind[1466]: Removed session 7. May 8 00:20:43.882788 kubelet[2684]: E0508 00:20:43.882757 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:44.291249 kubelet[2684]: E0508 00:20:44.291126 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:48.108133 kubelet[2684]: E0508 00:20:48.108094 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:48.887770 kubelet[2684]: E0508 00:20:48.887736 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:52.094505 kubelet[2684]: E0508 00:20:52.094255 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:52.893572 kubelet[2684]: E0508 00:20:52.893541 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:20:53.283900 update_engine[1468]: I20250508 00:20:53.283785 1468 update_attempter.cc:509] Updating boot flags... May 8 00:20:53.341538 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2764) May 8 00:20:53.413589 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2766) May 8 00:20:53.525314 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 42 scanned by (udev-worker) (2766) May 8 00:20:53.897544 kubelet[2684]: I0508 00:20:53.897510 2684 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:20:53.898345 containerd[1489]: time="2025-05-08T00:20:53.898307005Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:20:53.898977 kubelet[2684]: I0508 00:20:53.898772 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:20:54.295374 kubelet[2684]: E0508 00:20:54.295050 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:54.670187 kubelet[2684]: I0508 00:20:54.670128 2684 topology_manager.go:215] "Topology Admit Handler" podUID="1325f5b3-af74-456b-8d2d-b62db3221864" podNamespace="kube-system" podName="kube-proxy-bx48s"
May 8 00:20:54.679035 kubelet[2684]: I0508 00:20:54.678289 2684 topology_manager.go:215] "Topology Admit Handler" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" podNamespace="kube-system" podName="cilium-p6shj"
May 8 00:20:54.680428 systemd[1]: Created slice kubepods-besteffort-pod1325f5b3_af74_456b_8d2d_b62db3221864.slice - libcontainer container kubepods-besteffort-pod1325f5b3_af74_456b_8d2d_b62db3221864.slice.
May 8 00:20:54.697104 systemd[1]: Created slice kubepods-burstable-podb90ad5d5_9c73_492d_ac40_0214d42fee56.slice - libcontainer container kubepods-burstable-podb90ad5d5_9c73_492d_ac40_0214d42fee56.slice.
May 8 00:20:54.736103 kubelet[2684]: I0508 00:20:54.736079 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-run\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736228 kubelet[2684]: I0508 00:20:54.736200 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnxq\" (UniqueName: \"kubernetes.io/projected/1325f5b3-af74-456b-8d2d-b62db3221864-kube-api-access-clnxq\") pod \"kube-proxy-bx48s\" (UID: \"1325f5b3-af74-456b-8d2d-b62db3221864\") " pod="kube-system/kube-proxy-bx48s"
May 8 00:20:54.736228 kubelet[2684]: I0508 00:20:54.736226 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b90ad5d5-9c73-492d-ac40-0214d42fee56-clustermesh-secrets\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736228 kubelet[2684]: I0508 00:20:54.736243 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-xtables-lock\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736228 kubelet[2684]: I0508 00:20:54.736258 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-config-path\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736671 kubelet[2684]: I0508 00:20:54.736272 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksh2r\" (UniqueName: \"kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-kube-api-access-ksh2r\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736671 kubelet[2684]: I0508 00:20:54.736286 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1325f5b3-af74-456b-8d2d-b62db3221864-lib-modules\") pod \"kube-proxy-bx48s\" (UID: \"1325f5b3-af74-456b-8d2d-b62db3221864\") " pod="kube-system/kube-proxy-bx48s"
May 8 00:20:54.736671 kubelet[2684]: I0508 00:20:54.736299 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-etc-cni-netd\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736671 kubelet[2684]: I0508 00:20:54.736311 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-lib-modules\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736671 kubelet[2684]: I0508 00:20:54.736326 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-kernel\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736779 kubelet[2684]: I0508 00:20:54.736387 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-net\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736779 kubelet[2684]: I0508 00:20:54.736424 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1325f5b3-af74-456b-8d2d-b62db3221864-xtables-lock\") pod \"kube-proxy-bx48s\" (UID: \"1325f5b3-af74-456b-8d2d-b62db3221864\") " pod="kube-system/kube-proxy-bx48s"
May 8 00:20:54.736779 kubelet[2684]: I0508 00:20:54.736464 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cni-path\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736779 kubelet[2684]: I0508 00:20:54.736482 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-cgroup\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736779 kubelet[2684]: I0508 00:20:54.736506 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-hubble-tls\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736779 kubelet[2684]: I0508 00:20:54.736537 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1325f5b3-af74-456b-8d2d-b62db3221864-kube-proxy\") pod \"kube-proxy-bx48s\" (UID: \"1325f5b3-af74-456b-8d2d-b62db3221864\") " pod="kube-system/kube-proxy-bx48s"
May 8 00:20:54.736918 kubelet[2684]: I0508 00:20:54.736554 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-bpf-maps\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.736918 kubelet[2684]: I0508 00:20:54.736574 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-hostproc\") pod \"cilium-p6shj\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " pod="kube-system/cilium-p6shj"
May 8 00:20:54.976037 kubelet[2684]: I0508 00:20:54.975220 2684 topology_manager.go:215] "Topology Admit Handler" podUID="6fd34716-2694-4828-81df-ef45891c2edc" podNamespace="kube-system" podName="cilium-operator-599987898-pltsx"
May 8 00:20:54.987131 systemd[1]: Created slice kubepods-besteffort-pod6fd34716_2694_4828_81df_ef45891c2edc.slice - libcontainer container kubepods-besteffort-pod6fd34716_2694_4828_81df_ef45891c2edc.slice.
May 8 00:20:54.992823 kubelet[2684]: E0508 00:20:54.992782 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:54.993370 containerd[1489]: time="2025-05-08T00:20:54.993333629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bx48s,Uid:1325f5b3-af74-456b-8d2d-b62db3221864,Namespace:kube-system,Attempt:0,}"
May 8 00:20:55.002642 kubelet[2684]: E0508 00:20:55.002615 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:55.002938 containerd[1489]: time="2025-05-08T00:20:55.002906920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6shj,Uid:b90ad5d5-9c73-492d-ac40-0214d42fee56,Namespace:kube-system,Attempt:0,}"
May 8 00:20:55.067755 containerd[1489]: time="2025-05-08T00:20:55.067598562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:20:55.076787 containerd[1489]: time="2025-05-08T00:20:55.068112094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:20:55.076787 containerd[1489]: time="2025-05-08T00:20:55.068128602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:20:55.076787 containerd[1489]: time="2025-05-08T00:20:55.068204995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:20:55.080890 containerd[1489]: time="2025-05-08T00:20:55.080755806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:20:55.081053 containerd[1489]: time="2025-05-08T00:20:55.080843908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:20:55.081613 containerd[1489]: time="2025-05-08T00:20:55.080858747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:20:55.081613 containerd[1489]: time="2025-05-08T00:20:55.081140070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:20:55.101286 systemd[1]: Started cri-containerd-497b1adb104b721ebb22cb80937aee21e1ecd91e8f87a2e44b320a20ae18a48b.scope - libcontainer container 497b1adb104b721ebb22cb80937aee21e1ecd91e8f87a2e44b320a20ae18a48b.
May 8 00:20:55.112577 systemd[1]: Started cri-containerd-45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0.scope - libcontainer container 45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0.
May 8 00:20:55.138092 containerd[1489]: time="2025-05-08T00:20:55.137891916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bx48s,Uid:1325f5b3-af74-456b-8d2d-b62db3221864,Namespace:kube-system,Attempt:0,} returns sandbox id \"497b1adb104b721ebb22cb80937aee21e1ecd91e8f87a2e44b320a20ae18a48b\""
May 8 00:20:55.139777 kubelet[2684]: E0508 00:20:55.139346 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:55.145991 containerd[1489]: time="2025-05-08T00:20:55.145955021Z" level=info msg="CreateContainer within sandbox \"497b1adb104b721ebb22cb80937aee21e1ecd91e8f87a2e44b320a20ae18a48b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:20:55.160694 containerd[1489]: time="2025-05-08T00:20:55.160519295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6shj,Uid:b90ad5d5-9c73-492d-ac40-0214d42fee56,Namespace:kube-system,Attempt:0,} returns sandbox id \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\""
May 8 00:20:55.162732 kubelet[2684]: E0508 00:20:55.162047 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:55.165401 containerd[1489]: time="2025-05-08T00:20:55.165376436Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 8 00:20:55.169354 containerd[1489]: time="2025-05-08T00:20:55.169295754Z" level=info msg="CreateContainer within sandbox \"497b1adb104b721ebb22cb80937aee21e1ecd91e8f87a2e44b320a20ae18a48b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e73068d94f8346a0939e2596a8eb8fab447b0cfc7ea55c6dfdef5f08fc35b0b\""
May 8 00:20:55.170556 containerd[1489]: time="2025-05-08T00:20:55.170163213Z" level=info msg="StartContainer for \"7e73068d94f8346a0939e2596a8eb8fab447b0cfc7ea55c6dfdef5f08fc35b0b\""
May 8 00:20:55.198597 systemd[1]: Started cri-containerd-7e73068d94f8346a0939e2596a8eb8fab447b0cfc7ea55c6dfdef5f08fc35b0b.scope - libcontainer container 7e73068d94f8346a0939e2596a8eb8fab447b0cfc7ea55c6dfdef5f08fc35b0b.
May 8 00:20:55.230167 containerd[1489]: time="2025-05-08T00:20:55.229651036Z" level=info msg="StartContainer for \"7e73068d94f8346a0939e2596a8eb8fab447b0cfc7ea55c6dfdef5f08fc35b0b\" returns successfully"
May 8 00:20:55.292486 kubelet[2684]: E0508 00:20:55.292439 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:55.294358 containerd[1489]: time="2025-05-08T00:20:55.294278965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pltsx,Uid:6fd34716-2694-4828-81df-ef45891c2edc,Namespace:kube-system,Attempt:0,}"
May 8 00:20:55.320147 containerd[1489]: time="2025-05-08T00:20:55.320062952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:20:55.320413 containerd[1489]: time="2025-05-08T00:20:55.320344066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:20:55.320863 containerd[1489]: time="2025-05-08T00:20:55.320390492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:20:55.320863 containerd[1489]: time="2025-05-08T00:20:55.320635359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:20:55.340625 systemd[1]: Started cri-containerd-ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9.scope - libcontainer container ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9.
May 8 00:20:55.380221 containerd[1489]: time="2025-05-08T00:20:55.380175916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pltsx,Uid:6fd34716-2694-4828-81df-ef45891c2edc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\""
May 8 00:20:55.381366 kubelet[2684]: E0508 00:20:55.381062 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:55.899216 kubelet[2684]: E0508 00:20:55.899173 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:20:55.907481 kubelet[2684]: I0508 00:20:55.907106 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bx48s" podStartSLOduration=1.907093715 podStartE2EDuration="1.907093715s" podCreationTimestamp="2025-05-08 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:20:55.906974506 +0000 UTC m=+15.151755020" watchObservedRunningTime="2025-05-08 00:20:55.907093715 +0000 UTC m=+15.151874229"
May 8 00:20:59.061222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503993732.mount: Deactivated successfully.
May 8 00:21:00.549224 containerd[1489]: time="2025-05-08T00:21:00.549175158Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:00.550044 containerd[1489]: time="2025-05-08T00:21:00.550004092Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 8 00:21:00.550954 containerd[1489]: time="2025-05-08T00:21:00.550650330Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:00.552198 containerd[1489]: time="2025-05-08T00:21:00.552103563Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.386601989s"
May 8 00:21:00.552198 containerd[1489]: time="2025-05-08T00:21:00.552136351Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 8 00:21:00.553424 containerd[1489]: time="2025-05-08T00:21:00.553025682Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 8 00:21:00.554309 containerd[1489]: time="2025-05-08T00:21:00.554286228Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:21:00.566928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266840275.mount: Deactivated successfully.
May 8 00:21:00.571479 containerd[1489]: time="2025-05-08T00:21:00.571162768Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746\""
May 8 00:21:00.572219 containerd[1489]: time="2025-05-08T00:21:00.572054359Z" level=info msg="StartContainer for \"4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746\""
May 8 00:21:00.609569 systemd[1]: Started cri-containerd-4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746.scope - libcontainer container 4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746.
May 8 00:21:00.635809 containerd[1489]: time="2025-05-08T00:21:00.635776850Z" level=info msg="StartContainer for \"4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746\" returns successfully"
May 8 00:21:00.646349 systemd[1]: cri-containerd-4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746.scope: Deactivated successfully.
May 8 00:21:00.739869 containerd[1489]: time="2025-05-08T00:21:00.739719231Z" level=info msg="shim disconnected" id=4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746 namespace=k8s.io
May 8 00:21:00.740065 containerd[1489]: time="2025-05-08T00:21:00.739860872Z" level=warning msg="cleaning up after shim disconnected" id=4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746 namespace=k8s.io
May 8 00:21:00.740065 containerd[1489]: time="2025-05-08T00:21:00.739893749Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:21:00.754485 containerd[1489]: time="2025-05-08T00:21:00.753523254Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:21:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:21:00.907915 kubelet[2684]: E0508 00:21:00.907484 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:00.912039 containerd[1489]: time="2025-05-08T00:21:00.909824491Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:21:00.921470 containerd[1489]: time="2025-05-08T00:21:00.921428551Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9\""
May 8 00:21:00.922386 containerd[1489]: time="2025-05-08T00:21:00.921782777Z" level=info msg="StartContainer for \"2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9\""
May 8 00:21:00.953580 systemd[1]: Started cri-containerd-2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9.scope - libcontainer container 2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9.
May 8 00:21:00.982374 containerd[1489]: time="2025-05-08T00:21:00.982138332Z" level=info msg="StartContainer for \"2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9\" returns successfully"
May 8 00:21:01.000557 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:21:01.001091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:21:01.001311 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 8 00:21:01.008554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:21:01.008774 systemd[1]: cri-containerd-2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9.scope: Deactivated successfully.
May 8 00:21:01.031703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:21:01.040147 containerd[1489]: time="2025-05-08T00:21:01.040053232Z" level=info msg="shim disconnected" id=2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9 namespace=k8s.io
May 8 00:21:01.040147 containerd[1489]: time="2025-05-08T00:21:01.040100680Z" level=warning msg="cleaning up after shim disconnected" id=2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9 namespace=k8s.io
May 8 00:21:01.040147 containerd[1489]: time="2025-05-08T00:21:01.040110500Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:21:01.565277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746-rootfs.mount: Deactivated successfully.
May 8 00:21:01.911513 kubelet[2684]: E0508 00:21:01.911484 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:01.916005 containerd[1489]: time="2025-05-08T00:21:01.915970542Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:21:01.934014 containerd[1489]: time="2025-05-08T00:21:01.933951276Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb\""
May 8 00:21:01.935470 containerd[1489]: time="2025-05-08T00:21:01.934429426Z" level=info msg="StartContainer for \"a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb\""
May 8 00:21:01.981013 systemd[1]: Started cri-containerd-a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb.scope - libcontainer container a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb.
May 8 00:21:02.021403 containerd[1489]: time="2025-05-08T00:21:02.021372467Z" level=info msg="StartContainer for \"a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb\" returns successfully"
May 8 00:21:02.033812 systemd[1]: cri-containerd-a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb.scope: Deactivated successfully.
May 8 00:21:02.080086 containerd[1489]: time="2025-05-08T00:21:02.080028830Z" level=info msg="shim disconnected" id=a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb namespace=k8s.io
May 8 00:21:02.080670 containerd[1489]: time="2025-05-08T00:21:02.080643454Z" level=warning msg="cleaning up after shim disconnected" id=a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb namespace=k8s.io
May 8 00:21:02.080670 containerd[1489]: time="2025-05-08T00:21:02.080660043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:21:02.096389 containerd[1489]: time="2025-05-08T00:21:02.096362201Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:21:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:21:02.155130 containerd[1489]: time="2025-05-08T00:21:02.155073240Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:02.155880 containerd[1489]: time="2025-05-08T00:21:02.155845026Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 8 00:21:02.156528 containerd[1489]: time="2025-05-08T00:21:02.156509207Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:21:02.158210 containerd[1489]: time="2025-05-08T00:21:02.157845320Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.604791369s"
May 8 00:21:02.158210 containerd[1489]: time="2025-05-08T00:21:02.157871928Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 8 00:21:02.160341 containerd[1489]: time="2025-05-08T00:21:02.160295378Z" level=info msg="CreateContainer within sandbox \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 00:21:02.168395 containerd[1489]: time="2025-05-08T00:21:02.168319381Z" level=info msg="CreateContainer within sandbox \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\""
May 8 00:21:02.169077 containerd[1489]: time="2025-05-08T00:21:02.169051858Z" level=info msg="StartContainer for \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\""
May 8 00:21:02.196566 systemd[1]: Started cri-containerd-b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db.scope - libcontainer container b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db.
May 8 00:21:02.222240 containerd[1489]: time="2025-05-08T00:21:02.222210580Z" level=info msg="StartContainer for \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\" returns successfully"
May 8 00:21:02.568329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb-rootfs.mount: Deactivated successfully.
May 8 00:21:02.914305 kubelet[2684]: E0508 00:21:02.914254 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:02.919940 containerd[1489]: time="2025-05-08T00:21:02.919798318Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:21:02.922329 kubelet[2684]: E0508 00:21:02.921942 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:02.938749 containerd[1489]: time="2025-05-08T00:21:02.938621514Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754\""
May 8 00:21:02.943143 containerd[1489]: time="2025-05-08T00:21:02.943122603Z" level=info msg="StartContainer for \"ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754\""
May 8 00:21:02.951826 kubelet[2684]: I0508 00:21:02.951764 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-pltsx" podStartSLOduration=2.174804867 podStartE2EDuration="8.951749922s" podCreationTimestamp="2025-05-08 00:20:54 +0000 UTC" firstStartedPulling="2025-05-08 00:20:55.381912136 +0000 UTC m=+14.626692650" lastFinishedPulling="2025-05-08 00:21:02.158857181 +0000 UTC m=+21.403637705" observedRunningTime="2025-05-08 00:21:02.951410601 +0000 UTC m=+22.196191115" watchObservedRunningTime="2025-05-08 00:21:02.951749922 +0000 UTC m=+22.196530436"
May 8 00:21:02.984585 systemd[1]: Started cri-containerd-ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754.scope - libcontainer container ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754.
May 8 00:21:03.014391 systemd[1]: cri-containerd-ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754.scope: Deactivated successfully.
May 8 00:21:03.015729 containerd[1489]: time="2025-05-08T00:21:03.015490822Z" level=info msg="StartContainer for \"ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754\" returns successfully"
May 8 00:21:03.044168 containerd[1489]: time="2025-05-08T00:21:03.043917017Z" level=info msg="shim disconnected" id=ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754 namespace=k8s.io
May 8 00:21:03.044168 containerd[1489]: time="2025-05-08T00:21:03.044161034Z" level=warning msg="cleaning up after shim disconnected" id=ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754 namespace=k8s.io
May 8 00:21:03.044168 containerd[1489]: time="2025-05-08T00:21:03.044169884Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:21:03.056685 containerd[1489]: time="2025-05-08T00:21:03.056638186Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:21:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:21:03.562978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754-rootfs.mount: Deactivated successfully.
May 8 00:21:03.927512 kubelet[2684]: E0508 00:21:03.926295 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:03.927512 kubelet[2684]: E0508 00:21:03.926353 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:03.931642 containerd[1489]: time="2025-05-08T00:21:03.929763167Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:21:03.953757 containerd[1489]: time="2025-05-08T00:21:03.953711655Z" level=info msg="CreateContainer within sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\""
May 8 00:21:03.954177 containerd[1489]: time="2025-05-08T00:21:03.954150462Z" level=info msg="StartContainer for \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\""
May 8 00:21:03.990698 systemd[1]: Started cri-containerd-eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f.scope - libcontainer container eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f.
May 8 00:21:04.030645 containerd[1489]: time="2025-05-08T00:21:04.030600821Z" level=info msg="StartContainer for \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\" returns successfully"
May 8 00:21:04.163883 kubelet[2684]: I0508 00:21:04.163852 2684 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 8 00:21:04.188081 kubelet[2684]: I0508 00:21:04.186868 2684 topology_manager.go:215] "Topology Admit Handler" podUID="49e1f89f-58dd-4726-aa4a-2c47d566aca2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-t28sz"
May 8 00:21:04.191340 kubelet[2684]: I0508 00:21:04.190142 2684 topology_manager.go:215] "Topology Admit Handler" podUID="5bbbc257-69b0-444f-9aa6-d3135abdefc2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nlrp5"
May 8 00:21:04.200293 systemd[1]: Created slice kubepods-burstable-pod49e1f89f_58dd_4726_aa4a_2c47d566aca2.slice - libcontainer container kubepods-burstable-pod49e1f89f_58dd_4726_aa4a_2c47d566aca2.slice.
May 8 00:21:04.202669 kubelet[2684]: I0508 00:21:04.202416 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bbbc257-69b0-444f-9aa6-d3135abdefc2-config-volume\") pod \"coredns-7db6d8ff4d-nlrp5\" (UID: \"5bbbc257-69b0-444f-9aa6-d3135abdefc2\") " pod="kube-system/coredns-7db6d8ff4d-nlrp5"
May 8 00:21:04.202669 kubelet[2684]: I0508 00:21:04.202464 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp6nk\" (UniqueName: \"kubernetes.io/projected/5bbbc257-69b0-444f-9aa6-d3135abdefc2-kube-api-access-jp6nk\") pod \"coredns-7db6d8ff4d-nlrp5\" (UID: \"5bbbc257-69b0-444f-9aa6-d3135abdefc2\") " pod="kube-system/coredns-7db6d8ff4d-nlrp5"
May 8 00:21:04.202669 kubelet[2684]: I0508 00:21:04.202488 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49e1f89f-58dd-4726-aa4a-2c47d566aca2-config-volume\") pod \"coredns-7db6d8ff4d-t28sz\" (UID: \"49e1f89f-58dd-4726-aa4a-2c47d566aca2\") " pod="kube-system/coredns-7db6d8ff4d-t28sz"
May 8 00:21:04.202669 kubelet[2684]: I0508 00:21:04.202506 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9g2\" (UniqueName: \"kubernetes.io/projected/49e1f89f-58dd-4726-aa4a-2c47d566aca2-kube-api-access-6v9g2\") pod \"coredns-7db6d8ff4d-t28sz\" (UID: \"49e1f89f-58dd-4726-aa4a-2c47d566aca2\") " pod="kube-system/coredns-7db6d8ff4d-t28sz"
May 8 00:21:04.209953 systemd[1]: Created slice kubepods-burstable-pod5bbbc257_69b0_444f_9aa6_d3135abdefc2.slice - libcontainer container kubepods-burstable-pod5bbbc257_69b0_444f_9aa6_d3135abdefc2.slice.
May 8 00:21:04.506942 kubelet[2684]: E0508 00:21:04.506821 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:04.509606 containerd[1489]: time="2025-05-08T00:21:04.509038359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t28sz,Uid:49e1f89f-58dd-4726-aa4a-2c47d566aca2,Namespace:kube-system,Attempt:0,}"
May 8 00:21:04.515034 kubelet[2684]: E0508 00:21:04.515011 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:04.516947 containerd[1489]: time="2025-05-08T00:21:04.516926809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nlrp5,Uid:5bbbc257-69b0-444f-9aa6-d3135abdefc2,Namespace:kube-system,Attempt:0,}"
May 8 00:21:04.571013 systemd[1]: run-containerd-runc-k8s.io-eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f-runc.oUJtOK.mount: Deactivated successfully.
May 8 00:21:04.931725 kubelet[2684]: E0508 00:21:04.931672 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:04.949137 kubelet[2684]: I0508 00:21:04.948616 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p6shj" podStartSLOduration=5.56020923 podStartE2EDuration="10.948597484s" podCreationTimestamp="2025-05-08 00:20:54 +0000 UTC" firstStartedPulling="2025-05-08 00:20:55.164506786 +0000 UTC m=+14.409287300" lastFinishedPulling="2025-05-08 00:21:00.55289503 +0000 UTC m=+19.797675554" observedRunningTime="2025-05-08 00:21:04.947855112 +0000 UTC m=+24.192635626" watchObservedRunningTime="2025-05-08 00:21:04.948597484 +0000 UTC m=+24.193377998"
May 8 00:21:05.932229 kubelet[2684]: E0508 00:21:05.932192 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:06.312192 systemd-networkd[1396]: cilium_host: Link UP
May 8 00:21:06.312397 systemd-networkd[1396]: cilium_net: Link UP
May 8 00:21:06.313779 systemd-networkd[1396]: cilium_net: Gained carrier
May 8 00:21:06.314151 systemd-networkd[1396]: cilium_host: Gained carrier
May 8 00:21:06.314297 systemd-networkd[1396]: cilium_net: Gained IPv6LL
May 8 00:21:06.317836 systemd-networkd[1396]: cilium_host: Gained IPv6LL
May 8 00:21:06.435244 systemd-networkd[1396]: cilium_vxlan: Link UP
May 8 00:21:06.435345 systemd-networkd[1396]: cilium_vxlan: Gained carrier
May 8 00:21:06.640475 kernel: NET: Registered PF_ALG protocol family
May 8 00:21:06.934654 kubelet[2684]: E0508 00:21:06.934396 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:07.280847 systemd-networkd[1396]: lxc_health: Link UP
May 8 00:21:07.292274 systemd-networkd[1396]: lxc_health: Gained carrier
May 8 00:21:07.603411 systemd-networkd[1396]: lxc5e8ba6e878b5: Link UP
May 8 00:21:07.606749 systemd-networkd[1396]: lxcd140de150147: Link UP
May 8 00:21:07.610746 kernel: eth0: renamed from tmp618a1
May 8 00:21:07.617749 kernel: eth0: renamed from tmp8b653
May 8 00:21:07.625592 systemd-networkd[1396]: lxc5e8ba6e878b5: Gained carrier
May 8 00:21:07.630499 systemd-networkd[1396]: lxcd140de150147: Gained carrier
May 8 00:21:08.389616 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL
May 8 00:21:08.837634 systemd-networkd[1396]: lxc5e8ba6e878b5: Gained IPv6LL
May 8 00:21:08.966908 systemd-networkd[1396]: lxcd140de150147: Gained IPv6LL
May 8 00:21:09.004647 kubelet[2684]: E0508 00:21:09.004490 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:09.286416 systemd-networkd[1396]: lxc_health: Gained IPv6LL
May 8 00:21:10.540734 containerd[1489]: time="2025-05-08T00:21:10.540637634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:21:10.541579 containerd[1489]: time="2025-05-08T00:21:10.541018131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:21:10.541713 containerd[1489]: time="2025-05-08T00:21:10.541683178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:21:10.543661 containerd[1489]: time="2025-05-08T00:21:10.543576865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:21:10.568576 systemd[1]: Started cri-containerd-8b6535e77d38ba2d948095481237fa694de43368e38e26cfa530fb12aed6f1c0.scope - libcontainer container 8b6535e77d38ba2d948095481237fa694de43368e38e26cfa530fb12aed6f1c0.
May 8 00:21:10.613549 containerd[1489]: time="2025-05-08T00:21:10.613501557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nlrp5,Uid:5bbbc257-69b0-444f-9aa6-d3135abdefc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b6535e77d38ba2d948095481237fa694de43368e38e26cfa530fb12aed6f1c0\""
May 8 00:21:10.615470 kubelet[2684]: E0508 00:21:10.614607 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:10.617992 containerd[1489]: time="2025-05-08T00:21:10.617966566Z" level=info msg="CreateContainer within sandbox \"8b6535e77d38ba2d948095481237fa694de43368e38e26cfa530fb12aed6f1c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:21:10.633746 containerd[1489]: time="2025-05-08T00:21:10.632361509Z" level=info msg="CreateContainer within sandbox \"8b6535e77d38ba2d948095481237fa694de43368e38e26cfa530fb12aed6f1c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80b146e8297628928e9b489a64478a1cd14fa64aaaf72afe3ab0782f9d7e429a\""
May 8 00:21:10.634438 containerd[1489]: time="2025-05-08T00:21:10.634328713Z" level=info msg="StartContainer for \"80b146e8297628928e9b489a64478a1cd14fa64aaaf72afe3ab0782f9d7e429a\""
May 8 00:21:10.672802 systemd[1]: Started cri-containerd-80b146e8297628928e9b489a64478a1cd14fa64aaaf72afe3ab0782f9d7e429a.scope - libcontainer container 80b146e8297628928e9b489a64478a1cd14fa64aaaf72afe3ab0782f9d7e429a.
May 8 00:21:10.711835 containerd[1489]: time="2025-05-08T00:21:10.711792669Z" level=info msg="StartContainer for \"80b146e8297628928e9b489a64478a1cd14fa64aaaf72afe3ab0782f9d7e429a\" returns successfully"
May 8 00:21:10.748108 containerd[1489]: time="2025-05-08T00:21:10.746851443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:21:10.748108 containerd[1489]: time="2025-05-08T00:21:10.747180071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:21:10.748108 containerd[1489]: time="2025-05-08T00:21:10.747263538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:21:10.748108 containerd[1489]: time="2025-05-08T00:21:10.747515850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:21:10.770792 systemd[1]: Started cri-containerd-618a164f37622927a1e09d93dfb0fa27ebaf16e8c40a4eed31b4ea72303f5004.scope - libcontainer container 618a164f37622927a1e09d93dfb0fa27ebaf16e8c40a4eed31b4ea72303f5004.
May 8 00:21:10.811173 containerd[1489]: time="2025-05-08T00:21:10.811065759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-t28sz,Uid:49e1f89f-58dd-4726-aa4a-2c47d566aca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"618a164f37622927a1e09d93dfb0fa27ebaf16e8c40a4eed31b4ea72303f5004\""
May 8 00:21:10.812148 kubelet[2684]: E0508 00:21:10.812044 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:10.815866 containerd[1489]: time="2025-05-08T00:21:10.815823957Z" level=info msg="CreateContainer within sandbox \"618a164f37622927a1e09d93dfb0fa27ebaf16e8c40a4eed31b4ea72303f5004\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:21:10.830151 containerd[1489]: time="2025-05-08T00:21:10.830106744Z" level=info msg="CreateContainer within sandbox \"618a164f37622927a1e09d93dfb0fa27ebaf16e8c40a4eed31b4ea72303f5004\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4581a1afe922bb3331ec1bc1d861d25da6ab91528a2abd114c8fb782a95bbe15\""
May 8 00:21:10.830750 containerd[1489]: time="2025-05-08T00:21:10.830727243Z" level=info msg="StartContainer for \"4581a1afe922bb3331ec1bc1d861d25da6ab91528a2abd114c8fb782a95bbe15\""
May 8 00:21:10.858595 systemd[1]: Started cri-containerd-4581a1afe922bb3331ec1bc1d861d25da6ab91528a2abd114c8fb782a95bbe15.scope - libcontainer container 4581a1afe922bb3331ec1bc1d861d25da6ab91528a2abd114c8fb782a95bbe15.
May 8 00:21:10.883788 containerd[1489]: time="2025-05-08T00:21:10.883679630Z" level=info msg="StartContainer for \"4581a1afe922bb3331ec1bc1d861d25da6ab91528a2abd114c8fb782a95bbe15\" returns successfully"
May 8 00:21:10.943397 kubelet[2684]: E0508 00:21:10.943009 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:10.945594 kubelet[2684]: E0508 00:21:10.945557 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:10.954060 kubelet[2684]: I0508 00:21:10.953627 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nlrp5" podStartSLOduration=16.953613642 podStartE2EDuration="16.953613642s" podCreationTimestamp="2025-05-08 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:21:10.951958309 +0000 UTC m=+30.196738823" watchObservedRunningTime="2025-05-08 00:21:10.953613642 +0000 UTC m=+30.198394156"
May 8 00:21:10.965011 kubelet[2684]: I0508 00:21:10.964867 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-t28sz" podStartSLOduration=16.964858631 podStartE2EDuration="16.964858631s" podCreationTimestamp="2025-05-08 00:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:21:10.964136176 +0000 UTC m=+30.208916690" watchObservedRunningTime="2025-05-08 00:21:10.964858631 +0000 UTC m=+30.209639145"
May 8 00:21:11.221186 kubelet[2684]: I0508 00:21:11.220978 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:21:11.224239 kubelet[2684]: E0508 00:21:11.223343 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:11.947806 kubelet[2684]: E0508 00:21:11.947646 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:11.947806 kubelet[2684]: E0508 00:21:11.947693 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:12.948569 kubelet[2684]: E0508 00:21:12.948537 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:14.508500 kubelet[2684]: E0508 00:21:14.508414 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:14.951978 kubelet[2684]: E0508 00:21:14.951948 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:21:51.856327 kubelet[2684]: E0508 00:21:51.856231 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:04.857417 kubelet[2684]: E0508 00:22:04.856785 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:11.856336 kubelet[2684]: E0508 00:22:11.856293 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:14.857635 kubelet[2684]: E0508 00:22:14.856898 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:15.856240 kubelet[2684]: E0508 00:22:15.856182 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:17.856561 kubelet[2684]: E0508 00:22:17.856348 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:17.856561 kubelet[2684]: E0508 00:22:17.856488 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:20.856485 kubelet[2684]: E0508 00:22:20.856270 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:22:50.079312 systemd[1]: Started sshd@7-172.232.0.241:22-139.178.89.65:36776.service - OpenSSH per-connection server daemon (139.178.89.65:36776).
May 8 00:22:50.417056 sshd[4083]: Accepted publickey for core from 139.178.89.65 port 36776 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4
May 8 00:22:50.419008 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:22:50.424557 systemd-logind[1466]: New session 8 of user core.
May 8 00:22:50.429565 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:22:50.734334 sshd[4085]: Connection closed by 139.178.89.65 port 36776
May 8 00:22:50.735016 sshd-session[4083]: pam_unix(sshd:session): session closed for user core
May 8 00:22:50.738809 systemd[1]: sshd@7-172.232.0.241:22-139.178.89.65:36776.service: Deactivated successfully.
May 8 00:22:50.741101 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:22:50.743070 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit.
May 8 00:22:50.744529 systemd-logind[1466]: Removed session 8.
May 8 00:22:55.806643 systemd[1]: Started sshd@8-172.232.0.241:22-139.178.89.65:36782.service - OpenSSH per-connection server daemon (139.178.89.65:36782).
May 8 00:22:56.142936 sshd[4100]: Accepted publickey for core from 139.178.89.65 port 36782 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4
May 8 00:22:56.144775 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:22:56.150271 systemd-logind[1466]: New session 9 of user core.
May 8 00:22:56.157668 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:22:56.449775 sshd[4102]: Connection closed by 139.178.89.65 port 36782 May 8 00:22:56.450654 sshd-session[4100]: pam_unix(sshd:session): session closed for user core May 8 00:22:56.454021 systemd[1]: sshd@8-172.232.0.241:22-139.178.89.65:36782.service: Deactivated successfully. May 8 00:22:56.456401 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:22:56.458392 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. May 8 00:22:56.459665 systemd-logind[1466]: Removed session 9. May 8 00:22:57.855915 kubelet[2684]: E0508 00:22:57.855877 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:01.279892 update_engine[1468]: I20250508 00:23:01.279801 1468 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 8 00:23:01.279892 update_engine[1468]: I20250508 00:23:01.279881 1468 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 8 00:23:01.280306 update_engine[1468]: I20250508 00:23:01.280107 1468 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 8 00:23:01.280857 update_engine[1468]: I20250508 00:23:01.280826 1468 omaha_request_params.cc:62] Current group set to beta May 8 00:23:01.281091 update_engine[1468]: I20250508 00:23:01.280962 1468 update_attempter.cc:499] Already updated boot flags. Skipping. May 8 00:23:01.281091 update_engine[1468]: I20250508 00:23:01.280980 1468 update_attempter.cc:643] Scheduling an action processor start. May 8 00:23:01.281091 update_engine[1468]: I20250508 00:23:01.281000 1468 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:23:01.281381 update_engine[1468]: I20250508 00:23:01.281353 1468 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 8 00:23:01.281414 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 8 00:23:01.281626 update_engine[1468]: I20250508 00:23:01.281478 1468 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:23:01.281626 update_engine[1468]: I20250508 00:23:01.281493 1468 omaha_request_action.cc:272] Request: May 8 00:23:01.281626 update_engine[1468]: [Omaha request XML body not captured in this log] May 8 00:23:01.281626 update_engine[1468]: I20250508 00:23:01.281503 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:23:01.283107 update_engine[1468]: I20250508 00:23:01.283077 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:23:01.283433 update_engine[1468]: I20250508 00:23:01.283402 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 00:23:01.302965 update_engine[1468]: E20250508 00:23:01.302882 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:23:01.303059 update_engine[1468]: I20250508 00:23:01.303017 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 8 00:23:01.518689 systemd[1]: Started sshd@9-172.232.0.241:22-139.178.89.65:45562.service - OpenSSH per-connection server daemon (139.178.89.65:45562). May 8 00:23:01.855661 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 45562 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:01.857581 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:01.862700 systemd-logind[1466]: New session 10 of user core. May 8 00:23:01.867577 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:23:02.172859 sshd[4117]: Connection closed by 139.178.89.65 port 45562 May 8 00:23:02.173593 sshd-session[4115]: pam_unix(sshd:session): session closed for user core May 8 00:23:02.178364 systemd[1]: sshd@9-172.232.0.241:22-139.178.89.65:45562.service: Deactivated successfully. May 8 00:23:02.182353 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:23:02.184743 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. May 8 00:23:02.186277 systemd-logind[1466]: Removed session 10. May 8 00:23:02.237657 systemd[1]: Started sshd@10-172.232.0.241:22-139.178.89.65:45578.service - OpenSSH per-connection server daemon (139.178.89.65:45578). May 8 00:23:02.572409 sshd[4130]: Accepted publickey for core from 139.178.89.65 port 45578 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:02.574177 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:02.579894 systemd-logind[1466]: New session 11 of user core. May 8 00:23:02.588677 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:23:02.915398 sshd[4132]: Connection closed by 139.178.89.65 port 45578 May 8 00:23:02.916055 sshd-session[4130]: pam_unix(sshd:session): session closed for user core May 8 00:23:02.920161 systemd[1]: sshd@10-172.232.0.241:22-139.178.89.65:45578.service: Deactivated successfully. May 8 00:23:02.922928 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:23:02.924031 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. May 8 00:23:02.925167 systemd-logind[1466]: Removed session 11. May 8 00:23:02.983988 systemd[1]: Started sshd@11-172.232.0.241:22-139.178.89.65:45582.service - OpenSSH per-connection server daemon (139.178.89.65:45582). May 8 00:23:03.319063 sshd[4142]: Accepted publickey for core from 139.178.89.65 port 45582 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:03.320703 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:03.325513 systemd-logind[1466]: New session 12 of user core. May 8 00:23:03.330576 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:23:03.621815 sshd[4144]: Connection closed by 139.178.89.65 port 45582 May 8 00:23:03.622440 sshd-session[4142]: pam_unix(sshd:session): session closed for user core May 8 00:23:03.626127 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. May 8 00:23:03.627059 systemd[1]: sshd@11-172.232.0.241:22-139.178.89.65:45582.service: Deactivated successfully. 
May 8 00:23:03.629213 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:23:03.630250 systemd-logind[1466]: Removed session 12. May 8 00:23:08.679298 systemd[1]: Started sshd@12-172.232.0.241:22-139.178.89.65:49624.service - OpenSSH per-connection server daemon (139.178.89.65:49624). May 8 00:23:09.011110 sshd[4156]: Accepted publickey for core from 139.178.89.65 port 49624 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:09.012579 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:09.017180 systemd-logind[1466]: New session 13 of user core. May 8 00:23:09.022577 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:23:09.306658 sshd[4158]: Connection closed by 139.178.89.65 port 49624 May 8 00:23:09.307470 sshd-session[4156]: pam_unix(sshd:session): session closed for user core May 8 00:23:09.311417 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. May 8 00:23:09.312216 systemd[1]: sshd@12-172.232.0.241:22-139.178.89.65:49624.service: Deactivated successfully. May 8 00:23:09.314789 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:23:09.315732 systemd-logind[1466]: Removed session 13. May 8 00:23:11.276351 update_engine[1468]: I20250508 00:23:11.276281 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:23:11.276762 update_engine[1468]: I20250508 00:23:11.276611 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:23:11.276903 update_engine[1468]: I20250508 00:23:11.276847 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:23:11.277651 update_engine[1468]: E20250508 00:23:11.277616 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:23:11.277691 update_engine[1468]: I20250508 00:23:11.277670 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 8 00:23:14.370360 systemd[1]: Started sshd@13-172.232.0.241:22-139.178.89.65:49626.service - OpenSSH per-connection server daemon (139.178.89.65:49626). May 8 00:23:14.703888 sshd[4170]: Accepted publickey for core from 139.178.89.65 port 49626 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:14.705285 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:14.710082 systemd-logind[1466]: New session 14 of user core. May 8 00:23:14.715623 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:23:15.007891 sshd[4172]: Connection closed by 139.178.89.65 port 49626 May 8 00:23:15.008808 sshd-session[4170]: pam_unix(sshd:session): session closed for user core May 8 00:23:15.013019 systemd[1]: sshd@13-172.232.0.241:22-139.178.89.65:49626.service: Deactivated successfully. May 8 00:23:15.015182 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:23:15.017482 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. May 8 00:23:15.018782 systemd-logind[1466]: Removed session 14. May 8 00:23:15.073846 systemd[1]: Started sshd@14-172.232.0.241:22-139.178.89.65:49634.service - OpenSSH per-connection server daemon (139.178.89.65:49634). 
May 8 00:23:15.408796 sshd[4184]: Accepted publickey for core from 139.178.89.65 port 49634 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:15.410290 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:15.414762 systemd-logind[1466]: New session 15 of user core. May 8 00:23:15.421585 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:23:15.739703 sshd[4186]: Connection closed by 139.178.89.65 port 49634 May 8 00:23:15.740530 sshd-session[4184]: pam_unix(sshd:session): session closed for user core May 8 00:23:15.745568 systemd[1]: sshd@14-172.232.0.241:22-139.178.89.65:49634.service: Deactivated successfully. May 8 00:23:15.748556 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:23:15.749402 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. May 8 00:23:15.750814 systemd-logind[1466]: Removed session 15. May 8 00:23:15.801757 systemd[1]: Started sshd@15-172.232.0.241:22-139.178.89.65:49644.service - OpenSSH per-connection server daemon (139.178.89.65:49644). May 8 00:23:16.143247 sshd[4196]: Accepted publickey for core from 139.178.89.65 port 49644 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:16.145017 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:16.150185 systemd-logind[1466]: New session 16 of user core. May 8 00:23:16.157568 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:23:17.489801 sshd[4198]: Connection closed by 139.178.89.65 port 49644 May 8 00:23:17.490537 sshd-session[4196]: pam_unix(sshd:session): session closed for user core May 8 00:23:17.493835 systemd[1]: sshd@15-172.232.0.241:22-139.178.89.65:49644.service: Deactivated successfully. May 8 00:23:17.495745 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:23:17.498083 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. May 8 00:23:17.499317 systemd-logind[1466]: Removed session 16. May 8 00:23:17.558082 systemd[1]: Started sshd@16-172.232.0.241:22-139.178.89.65:60468.service - OpenSSH per-connection server daemon (139.178.89.65:60468). May 8 00:23:17.893748 sshd[4215]: Accepted publickey for core from 139.178.89.65 port 60468 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:17.895281 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:17.900164 systemd-logind[1466]: New session 17 of user core. May 8 00:23:17.906638 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:23:18.306995 sshd[4217]: Connection closed by 139.178.89.65 port 60468 May 8 00:23:18.307747 sshd-session[4215]: pam_unix(sshd:session): session closed for user core May 8 00:23:18.312287 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. May 8 00:23:18.313380 systemd[1]: sshd@16-172.232.0.241:22-139.178.89.65:60468.service: Deactivated successfully. May 8 00:23:18.315887 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:23:18.317012 systemd-logind[1466]: Removed session 17. May 8 00:23:18.368916 systemd[1]: Started sshd@17-172.232.0.241:22-139.178.89.65:60478.service - OpenSSH per-connection server daemon (139.178.89.65:60478). 
May 8 00:23:18.706287 sshd[4227]: Accepted publickey for core from 139.178.89.65 port 60478 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:18.708023 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:18.713158 systemd-logind[1466]: New session 18 of user core. May 8 00:23:18.717562 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:23:19.007721 sshd[4229]: Connection closed by 139.178.89.65 port 60478 May 8 00:23:19.008501 sshd-session[4227]: pam_unix(sshd:session): session closed for user core May 8 00:23:19.012506 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. May 8 00:23:19.013708 systemd[1]: sshd@17-172.232.0.241:22-139.178.89.65:60478.service: Deactivated successfully. May 8 00:23:19.015872 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:23:19.017763 systemd-logind[1466]: Removed session 18. May 8 00:23:19.856077 kubelet[2684]: E0508 00:23:19.856041 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:21.275977 update_engine[1468]: I20250508 00:23:21.275894 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:23:21.276346 update_engine[1468]: I20250508 00:23:21.276158 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:23:21.276373 update_engine[1468]: I20250508 00:23:21.276358 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:23:21.277429 update_engine[1468]: E20250508 00:23:21.277367 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:23:21.277515 update_engine[1468]: I20250508 00:23:21.277485 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 8 00:23:22.857086 kubelet[2684]: E0508 00:23:22.856123 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:24.073651 systemd[1]: Started sshd@18-172.232.0.241:22-139.178.89.65:60492.service - OpenSSH per-connection server daemon (139.178.89.65:60492). May 8 00:23:24.400916 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 60492 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:24.402441 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:24.406844 systemd-logind[1466]: New session 19 of user core. May 8 00:23:24.413582 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:23:24.702472 sshd[4247]: Connection closed by 139.178.89.65 port 60492 May 8 00:23:24.703141 sshd-session[4245]: pam_unix(sshd:session): session closed for user core May 8 00:23:24.707562 systemd[1]: sshd@18-172.232.0.241:22-139.178.89.65:60492.service: Deactivated successfully. May 8 00:23:24.710068 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:23:24.710992 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. May 8 00:23:24.712873 systemd-logind[1466]: Removed session 19. 
May 8 00:23:25.856351 kubelet[2684]: E0508 00:23:25.856315 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:29.769655 systemd[1]: Started sshd@19-172.232.0.241:22-139.178.89.65:57350.service - OpenSSH per-connection server daemon (139.178.89.65:57350). May 8 00:23:29.856814 kubelet[2684]: E0508 00:23:29.856783 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:30.093893 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 57350 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:30.095674 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:30.101467 systemd-logind[1466]: New session 20 of user core. May 8 00:23:30.110574 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:23:30.393101 sshd[4263]: Connection closed by 139.178.89.65 port 57350 May 8 00:23:30.393919 sshd-session[4261]: pam_unix(sshd:session): session closed for user core May 8 00:23:30.398573 systemd[1]: sshd@19-172.232.0.241:22-139.178.89.65:57350.service: Deactivated successfully. May 8 00:23:30.401425 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:23:30.402431 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit. May 8 00:23:30.403394 systemd-logind[1466]: Removed session 20. May 8 00:23:31.276545 update_engine[1468]: I20250508 00:23:31.276489 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:23:31.277180 update_engine[1468]: I20250508 00:23:31.277133 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:23:31.277438 update_engine[1468]: I20250508 00:23:31.277386 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:23:31.278106 update_engine[1468]: E20250508 00:23:31.278069 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:23:31.278159 update_engine[1468]: I20250508 00:23:31.278122 1468 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 00:23:31.278159 update_engine[1468]: I20250508 00:23:31.278133 1468 omaha_request_action.cc:617] Omaha request response: May 8 00:23:31.278236 update_engine[1468]: E20250508 00:23:31.278204 1468 omaha_request_action.cc:636] Omaha request network transfer failed. May 8 00:23:31.278267 update_engine[1468]: I20250508 00:23:31.278241 1468 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 8 00:23:31.278267 update_engine[1468]: I20250508 00:23:31.278249 1468 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:23:31.278267 update_engine[1468]: I20250508 00:23:31.278255 1468 update_attempter.cc:306] Processing Done. May 8 00:23:31.278336 update_engine[1468]: E20250508 00:23:31.278271 1468 update_attempter.cc:619] Update failed. 
May 8 00:23:31.278336 update_engine[1468]: I20250508 00:23:31.278278 1468 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 8 00:23:31.278336 update_engine[1468]: I20250508 00:23:31.278285 1468 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 8 00:23:31.278336 update_engine[1468]: I20250508 00:23:31.278291 1468 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 8 00:23:31.278418 update_engine[1468]: I20250508 00:23:31.278360 1468 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 00:23:31.278418 update_engine[1468]: I20250508 00:23:31.278386 1468 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 00:23:31.278418 update_engine[1468]: I20250508 00:23:31.278393 1468 omaha_request_action.cc:272] Request: May 8 00:23:31.278418 update_engine[1468]: [Omaha request XML body not captured in this log] May 8 00:23:31.278418 update_engine[1468]: I20250508 00:23:31.278401 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 00:23:31.278627 update_engine[1468]: I20250508 00:23:31.278573 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 00:23:31.278778 update_engine[1468]: I20250508 00:23:31.278743 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 00:23:31.279165 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 8 00:23:31.279507 update_engine[1468]: E20250508 00:23:31.279438 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 00:23:31.279574 update_engine[1468]: I20250508 00:23:31.279551 1468 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 00:23:31.279574 update_engine[1468]: I20250508 00:23:31.279567 1468 omaha_request_action.cc:617] Omaha request response: May 8 00:23:31.279632 update_engine[1468]: I20250508 00:23:31.279578 1468 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:23:31.279632 update_engine[1468]: I20250508 00:23:31.279585 1468 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 00:23:31.279632 update_engine[1468]: I20250508 00:23:31.279592 1468 update_attempter.cc:306] Processing Done. May 8 00:23:31.279632 update_engine[1468]: I20250508 00:23:31.279598 1468 update_attempter.cc:310] Error event sent. May 8 00:23:31.279632 update_engine[1468]: I20250508 00:23:31.279610 1468 update_check_scheduler.cc:74] Next update check in 44m31s May 8 00:23:31.279905 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 8 00:23:34.856829 kubelet[2684]: E0508 00:23:34.856221 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:35.461651 systemd[1]: Started sshd@20-172.232.0.241:22-139.178.89.65:57362.service - OpenSSH per-connection server daemon (139.178.89.65:57362).
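The update_engine exchange above fails by design: the Omaha server on this image appears to be set to the literal string "disabled", so every transfer ends with "Could not resolve host: disabled"; the fetcher makes an initial attempt plus three retries at ten-second intervals (00:23:01, :11, :21, :31), maps the failure to error code 2000 (kActionCodeOmahaErrorInHTTPResponse), and schedules the next periodic check 44m31s out. A small Go sketch of that bounded fixed-interval retry pattern, with placeholder names; it models the behavior visible in the log, not update_engine's C++ internals:

package main

import (
	"errors"
	"fmt"
	"time"
)

const (
	maxRetries    = 3                // "No HTTP response, retry 1..3" in the log
	retryInterval = 10 * time.Second // spacing between the retry lines above
)

// fetchOnce stands in for libcurl_http_fetcher; resolving the host
// "disabled" always fails, which is exactly what the log shows.
func fetchOnce(server string) error {
	return fmt.Errorf("could not resolve host: %s", server)
}

// checkForUpdate performs the initial attempt plus maxRetries retries,
// then reports the transfer as failed so the caller can schedule the
// next periodic check (44m31s later in the log).
func checkForUpdate(server string) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = fetchOnce(server); err == nil {
			return nil
		}
		if attempt < maxRetries {
			fmt.Printf("no HTTP response, retry %d\n", attempt+1)
			time.Sleep(retryInterval)
		}
	}
	return errors.Join(errors.New("omaha request network transfer failed"), err)
}

func main() {
	if err := checkForUpdate("disabled"); err != nil {
		fmt.Println("update failed:", err)
	}
}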
May 8 00:23:35.799589 sshd[4276]: Accepted publickey for core from 139.178.89.65 port 57362 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:35.800997 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:35.806045 systemd-logind[1466]: New session 21 of user core. May 8 00:23:35.812564 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:23:36.103752 sshd[4278]: Connection closed by 139.178.89.65 port 57362 May 8 00:23:36.104314 sshd-session[4276]: pam_unix(sshd:session): session closed for user core May 8 00:23:36.108838 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit. May 8 00:23:36.109930 systemd[1]: sshd@20-172.232.0.241:22-139.178.89.65:57362.service: Deactivated successfully. May 8 00:23:36.112074 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:23:36.112968 systemd-logind[1466]: Removed session 21. May 8 00:23:36.165643 systemd[1]: Started sshd@21-172.232.0.241:22-139.178.89.65:57366.service - OpenSSH per-connection server daemon (139.178.89.65:57366). May 8 00:23:36.491595 sshd[4290]: Accepted publickey for core from 139.178.89.65 port 57366 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4 May 8 00:23:36.493164 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:23:36.497269 systemd-logind[1466]: New session 22 of user core. May 8 00:23:36.500561 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:23:37.962997 containerd[1489]: time="2025-05-08T00:23:37.962739166Z" level=info msg="StopContainer for \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\" with timeout 30 (s)" May 8 00:23:37.964262 containerd[1489]: time="2025-05-08T00:23:37.963941859Z" level=info msg="Stop container \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\" with signal terminated" May 8 00:23:37.982116 systemd[1]: cri-containerd-b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db.scope: Deactivated successfully. May 8 00:23:38.006493 containerd[1489]: time="2025-05-08T00:23:38.006390156Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:23:38.022958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db-rootfs.mount: Deactivated successfully. 
May 8 00:23:38.024550 containerd[1489]: time="2025-05-08T00:23:38.024459515Z" level=info msg="StopContainer for \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\" with timeout 2 (s)" May 8 00:23:38.025170 containerd[1489]: time="2025-05-08T00:23:38.025103611Z" level=info msg="Stop container \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\" with signal terminated" May 8 00:23:38.033149 systemd-networkd[1396]: lxc_health: Link DOWN May 8 00:23:38.033158 systemd-networkd[1396]: lxc_health: Lost carrier May 8 00:23:38.036592 containerd[1489]: time="2025-05-08T00:23:38.035332108Z" level=info msg="shim disconnected" id=b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db namespace=k8s.io May 8 00:23:38.036592 containerd[1489]: time="2025-05-08T00:23:38.035379057Z" level=warning msg="cleaning up after shim disconnected" id=b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db namespace=k8s.io May 8 00:23:38.036592 containerd[1489]: time="2025-05-08T00:23:38.035387997Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:23:38.051152 systemd[1]: cri-containerd-eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f.scope: Deactivated successfully. May 8 00:23:38.051898 systemd[1]: cri-containerd-eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f.scope: Consumed 6.443s CPU time, 126.3M memory peak, 136K read from disk, 13.3M written to disk. May 8 00:23:38.067656 containerd[1489]: time="2025-05-08T00:23:38.067523350Z" level=info msg="StopContainer for \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\" returns successfully" May 8 00:23:38.068534 containerd[1489]: time="2025-05-08T00:23:38.068140856Z" level=info msg="StopPodSandbox for \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\"" May 8 00:23:38.068534 containerd[1489]: time="2025-05-08T00:23:38.068184456Z" level=info msg="Container to stop \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:23:38.072966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9-shm.mount: Deactivated successfully. May 8 00:23:38.082373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f-rootfs.mount: Deactivated successfully. May 8 00:23:38.086662 containerd[1489]: time="2025-05-08T00:23:38.086430964Z" level=info msg="shim disconnected" id=eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f namespace=k8s.io May 8 00:23:38.086662 containerd[1489]: time="2025-05-08T00:23:38.086652292Z" level=warning msg="cleaning up after shim disconnected" id=eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f namespace=k8s.io May 8 00:23:38.086662 containerd[1489]: time="2025-05-08T00:23:38.086662322Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:23:38.086888 systemd[1]: cri-containerd-ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9.scope: Deactivated successfully. 
May 8 00:23:38.118857 containerd[1489]: time="2025-05-08T00:23:38.118832664Z" level=info msg="StopContainer for \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\" returns successfully" May 8 00:23:38.119414 containerd[1489]: time="2025-05-08T00:23:38.119381591Z" level=info msg="StopPodSandbox for \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\"" May 8 00:23:38.119732 containerd[1489]: time="2025-05-08T00:23:38.119434911Z" level=info msg="Container to stop \"ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:23:38.119790 containerd[1489]: time="2025-05-08T00:23:38.119751639Z" level=info msg="Container to stop \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:23:38.119790 containerd[1489]: time="2025-05-08T00:23:38.119764889Z" level=info msg="Container to stop \"4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:23:38.119790 containerd[1489]: time="2025-05-08T00:23:38.119775988Z" level=info msg="Container to stop \"2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:23:38.119790 containerd[1489]: time="2025-05-08T00:23:38.119783828Z" level=info msg="Container to stop \"a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:23:38.122088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0-shm.mount: Deactivated successfully. May 8 00:23:38.134093 systemd[1]: cri-containerd-45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0.scope: Deactivated successfully. 
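The repeated containerd messages of the form Container to stop "..." must be in running or unknown state, current state "CONTAINER_EXITED" during the sandbox teardown above are informational: when a pod sandbox is stopped, only containers still running (or in an unknown state) receive a stop signal, and already-exited ones are simply recorded and skipped. A toy Go sketch of that guard, with invented type names rather than the real CRI API:

package main

import "fmt"

// ContainerState is a stand-in for the CRI container state enum.
type ContainerState string

const (
	ContainerRunning ContainerState = "CONTAINER_RUNNING"
	ContainerExited  ContainerState = "CONTAINER_EXITED"
	ContainerUnknown ContainerState = "CONTAINER_UNKNOWN"
)

// shouldSignal reports whether a stop request should actually deliver a
// signal: only running or unknown containers qualify, which is why the
// teardown logs the current state of each already-exited container
// instead of signalling it again.
func shouldSignal(s ContainerState) bool {
	return s == ContainerRunning || s == ContainerUnknown
}

func main() {
	for _, s := range []ContainerState{ContainerRunning, ContainerExited, ContainerUnknown} {
		fmt.Printf("%s -> deliver stop signal: %v\n", s, shouldSignal(s))
	}
}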
May 8 00:23:38.138499 containerd[1489]: time="2025-05-08T00:23:38.137388840Z" level=info msg="shim disconnected" id=ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9 namespace=k8s.io May 8 00:23:38.138499 containerd[1489]: time="2025-05-08T00:23:38.137430501Z" level=warning msg="cleaning up after shim disconnected" id=ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9 namespace=k8s.io May 8 00:23:38.138499 containerd[1489]: time="2025-05-08T00:23:38.137438591Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:23:38.156694 containerd[1489]: time="2025-05-08T00:23:38.156668412Z" level=info msg="TearDown network for sandbox \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" successfully" May 8 00:23:38.156780 containerd[1489]: time="2025-05-08T00:23:38.156766511Z" level=info msg="StopPodSandbox for \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" returns successfully" May 8 00:23:38.170708 containerd[1489]: time="2025-05-08T00:23:38.170556287Z" level=info msg="shim disconnected" id=45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0 namespace=k8s.io May 8 00:23:38.170905 containerd[1489]: time="2025-05-08T00:23:38.170887194Z" level=warning msg="cleaning up after shim disconnected" id=45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0 namespace=k8s.io May 8 00:23:38.170967 containerd[1489]: time="2025-05-08T00:23:38.170954115Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:23:38.183046 kubelet[2684]: I0508 00:23:38.183026 2684 scope.go:117] "RemoveContainer" containerID="b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db" May 8 00:23:38.185376 containerd[1489]: time="2025-05-08T00:23:38.185345755Z" level=info msg="RemoveContainer for \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\"" May 8 00:23:38.191271 containerd[1489]: time="2025-05-08T00:23:38.191251749Z" level=info msg="RemoveContainer for \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\" returns successfully" May 8 00:23:38.191898 kubelet[2684]: I0508 00:23:38.191882 2684 scope.go:117] "RemoveContainer" containerID="b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db" May 8 00:23:38.192132 containerd[1489]: time="2025-05-08T00:23:38.192082564Z" level=error msg="ContainerStatus for \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\": not found" May 8 00:23:38.192285 kubelet[2684]: E0508 00:23:38.192267 2684 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\": not found" containerID="b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db" May 8 00:23:38.192420 kubelet[2684]: I0508 00:23:38.192359 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db"} err="failed to get container status \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1c6b358883bb92bb032b9686d052ea57a45f3043e970ff8dd8503a8b3bf77db\": not found" May 8 00:23:38.196888 containerd[1489]: 
time="2025-05-08T00:23:38.196617586Z" level=info msg="TearDown network for sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" successfully" May 8 00:23:38.196888 containerd[1489]: time="2025-05-08T00:23:38.196635396Z" level=info msg="StopPodSandbox for \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" returns successfully" May 8 00:23:38.262743 kubelet[2684]: I0508 00:23:38.262658 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-run\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264044 kubelet[2684]: I0508 00:23:38.262865 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-hostproc\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264044 kubelet[2684]: I0508 00:23:38.262890 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-lib-modules\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264044 kubelet[2684]: I0508 00:23:38.262911 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-config-path\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264044 kubelet[2684]: I0508 00:23:38.262928 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksh2r\" (UniqueName: \"kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-kube-api-access-ksh2r\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264044 kubelet[2684]: I0508 00:23:38.262946 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-net\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264044 kubelet[2684]: I0508 00:23:38.262962 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cni-path\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264216 kubelet[2684]: I0508 00:23:38.262977 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b90ad5d5-9c73-492d-ac40-0214d42fee56-clustermesh-secrets\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264216 kubelet[2684]: I0508 00:23:38.262992 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-hubble-tls\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264216 kubelet[2684]: I0508 
00:23:38.263007 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fd34716-2694-4828-81df-ef45891c2edc-cilium-config-path\") pod \"6fd34716-2694-4828-81df-ef45891c2edc\" (UID: \"6fd34716-2694-4828-81df-ef45891c2edc\") " May 8 00:23:38.264216 kubelet[2684]: I0508 00:23:38.263022 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzplz\" (UniqueName: \"kubernetes.io/projected/6fd34716-2694-4828-81df-ef45891c2edc-kube-api-access-dzplz\") pod \"6fd34716-2694-4828-81df-ef45891c2edc\" (UID: \"6fd34716-2694-4828-81df-ef45891c2edc\") " May 8 00:23:38.264216 kubelet[2684]: I0508 00:23:38.263034 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-bpf-maps\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264216 kubelet[2684]: I0508 00:23:38.263048 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-etc-cni-netd\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264359 kubelet[2684]: I0508 00:23:38.263062 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-cgroup\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264359 kubelet[2684]: I0508 00:23:38.263077 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-kernel\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264359 kubelet[2684]: I0508 00:23:38.263091 2684 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-xtables-lock\") pod \"b90ad5d5-9c73-492d-ac40-0214d42fee56\" (UID: \"b90ad5d5-9c73-492d-ac40-0214d42fee56\") " May 8 00:23:38.264359 kubelet[2684]: I0508 00:23:38.263128 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.264359 kubelet[2684]: I0508 00:23:38.263196 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-hostproc" (OuterVolumeSpecName: "hostproc") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.264499 kubelet[2684]: I0508 00:23:38.263213 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.264499 kubelet[2684]: I0508 00:23:38.262707 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.267213 kubelet[2684]: I0508 00:23:38.267194 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:23:38.267890 kubelet[2684]: I0508 00:23:38.267836 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd34716-2694-4828-81df-ef45891c2edc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6fd34716-2694-4828-81df-ef45891c2edc" (UID: "6fd34716-2694-4828-81df-ef45891c2edc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:23:38.268142 kubelet[2684]: I0508 00:23:38.268114 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.268185 kubelet[2684]: I0508 00:23:38.268147 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.268216 kubelet[2684]: I0508 00:23:38.268188 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.268216 kubelet[2684]: I0508 00:23:38.268204 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.268269 kubelet[2684]: I0508 00:23:38.268220 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cni-path" (OuterVolumeSpecName: "cni-path") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.268269 kubelet[2684]: I0508 00:23:38.268236 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:23:38.272497 kubelet[2684]: I0508 00:23:38.272475 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd34716-2694-4828-81df-ef45891c2edc-kube-api-access-dzplz" (OuterVolumeSpecName: "kube-api-access-dzplz") pod "6fd34716-2694-4828-81df-ef45891c2edc" (UID: "6fd34716-2694-4828-81df-ef45891c2edc"). InnerVolumeSpecName "kube-api-access-dzplz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:23:38.273045 kubelet[2684]: I0508 00:23:38.273028 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b90ad5d5-9c73-492d-ac40-0214d42fee56-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:23:38.273926 kubelet[2684]: I0508 00:23:38.273898 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-kube-api-access-ksh2r" (OuterVolumeSpecName: "kube-api-access-ksh2r") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "kube-api-access-ksh2r". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:23:38.274477 kubelet[2684]: I0508 00:23:38.274406 2684 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b90ad5d5-9c73-492d-ac40-0214d42fee56" (UID: "b90ad5d5-9c73-492d-ac40-0214d42fee56"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:23:38.363472 kubelet[2684]: I0508 00:23:38.363410 2684 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-hostproc\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363472 kubelet[2684]: I0508 00:23:38.363472 2684 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-lib-modules\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363485 2684 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-config-path\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363502 2684 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ksh2r\" (UniqueName: \"kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-kube-api-access-ksh2r\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363511 2684 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-net\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363519 2684 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cni-path\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363528 2684 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b90ad5d5-9c73-492d-ac40-0214d42fee56-clustermesh-secrets\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363536 2684 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b90ad5d5-9c73-492d-ac40-0214d42fee56-hubble-tls\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363544 2684 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fd34716-2694-4828-81df-ef45891c2edc-cilium-config-path\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363588 kubelet[2684]: I0508 00:23:38.363555 2684 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dzplz\" (UniqueName: \"kubernetes.io/projected/6fd34716-2694-4828-81df-ef45891c2edc-kube-api-access-dzplz\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363771 kubelet[2684]: I0508 00:23:38.363563 2684 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-bpf-maps\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363771 kubelet[2684]: I0508 00:23:38.363573 2684 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-etc-cni-netd\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363771 kubelet[2684]: I0508 00:23:38.363581 2684 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-cgroup\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363771 kubelet[2684]: I0508 00:23:38.363589 2684 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-host-proc-sys-kernel\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363771 kubelet[2684]: I0508 00:23:38.363599 2684 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-xtables-lock\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.363771 kubelet[2684]: I0508 00:23:38.363607 2684 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b90ad5d5-9c73-492d-ac40-0214d42fee56-cilium-run\") on node \"172-232-0-241\" DevicePath \"\"" May 8 00:23:38.488592 systemd[1]: Removed slice kubepods-besteffort-pod6fd34716_2694_4828_81df_ef45891c2edc.slice - libcontainer container kubepods-besteffort-pod6fd34716_2694_4828_81df_ef45891c2edc.slice. May 8 00:23:38.857215 kubelet[2684]: E0508 00:23:38.856435 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:38.858285 kubelet[2684]: E0508 00:23:38.858066 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" May 8 00:23:38.864061 kubelet[2684]: I0508 00:23:38.863409 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd34716-2694-4828-81df-ef45891c2edc" path="/var/lib/kubelet/pods/6fd34716-2694-4828-81df-ef45891c2edc/volumes" May 8 00:23:38.869105 systemd[1]: Removed slice kubepods-burstable-podb90ad5d5_9c73_492d_ac40_0214d42fee56.slice - libcontainer container kubepods-burstable-podb90ad5d5_9c73_492d_ac40_0214d42fee56.slice. May 8 00:23:38.869200 systemd[1]: kubepods-burstable-podb90ad5d5_9c73_492d_ac40_0214d42fee56.slice: Consumed 6.536s CPU time, 126.8M memory peak, 136K read from disk, 13.3M written to disk. May 8 00:23:38.985646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9-rootfs.mount: Deactivated successfully. May 8 00:23:38.985755 systemd[1]: var-lib-kubelet-pods-6fd34716\x2d2694\x2d4828\x2d81df\x2def45891c2edc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzplz.mount: Deactivated successfully. May 8 00:23:38.985845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0-rootfs.mount: Deactivated successfully. May 8 00:23:38.985921 systemd[1]: var-lib-kubelet-pods-b90ad5d5\x2d9c73\x2d492d\x2dac40\x2d0214d42fee56-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksh2r.mount: Deactivated successfully. May 8 00:23:38.985993 systemd[1]: var-lib-kubelet-pods-b90ad5d5\x2d9c73\x2d492d\x2dac40\x2d0214d42fee56-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:23:38.986067 systemd[1]: var-lib-kubelet-pods-b90ad5d5\x2d9c73\x2d492d\x2dac40\x2d0214d42fee56-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 8 00:23:39.197312 kubelet[2684]: I0508 00:23:39.197171 2684 scope.go:117] "RemoveContainer" containerID="eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f" May 8 00:23:39.200908 containerd[1489]: time="2025-05-08T00:23:39.200864561Z" level=info msg="RemoveContainer for \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\"" May 8 00:23:39.204482 containerd[1489]: time="2025-05-08T00:23:39.204418979Z" level=info msg="RemoveContainer for \"eb89c33e81ee09748c69edb71d79870b14c8f7f470f42868712fdd1dc2c6c03f\" returns successfully" May 8 00:23:39.204804 kubelet[2684]: I0508 00:23:39.204701 2684 scope.go:117] "RemoveContainer" containerID="ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754" May 8 00:23:39.206499 containerd[1489]: time="2025-05-08T00:23:39.206258138Z" level=info msg="RemoveContainer for \"ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754\"" May 8 00:23:39.208453 containerd[1489]: time="2025-05-08T00:23:39.208411425Z" level=info msg="RemoveContainer for \"ed49693295ed6bb85ff443ff6be5a386a0ab5989d3318d073edf49e041232754\" returns successfully" May 8 00:23:39.209611 kubelet[2684]: I0508 00:23:39.209580 2684 scope.go:117] "RemoveContainer" containerID="a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb" May 8 00:23:39.210912 containerd[1489]: time="2025-05-08T00:23:39.210841510Z" level=info msg="RemoveContainer for \"a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb\"" May 8 00:23:39.213222 containerd[1489]: time="2025-05-08T00:23:39.213201406Z" level=info msg="RemoveContainer for \"a1a4a05a569d293210b1d7bf93043624557ca0a0e9d52600f9b8389484082abb\" returns successfully" May 8 00:23:39.213732 kubelet[2684]: I0508 00:23:39.213692 2684 scope.go:117] "RemoveContainer" containerID="2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9" May 8 00:23:39.214595 containerd[1489]: time="2025-05-08T00:23:39.214574917Z" level=info msg="RemoveContainer for \"2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9\"" May 8 00:23:39.216984 containerd[1489]: time="2025-05-08T00:23:39.216965712Z" level=info msg="RemoveContainer for \"2aaf81a9bec3d4fe4621249978bf0de9d4d0fb6147674968423c8f667f3e71a9\" returns successfully" May 8 00:23:39.217163 kubelet[2684]: I0508 00:23:39.217081 2684 scope.go:117] "RemoveContainer" containerID="4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746" May 8 00:23:39.217969 containerd[1489]: time="2025-05-08T00:23:39.217951356Z" level=info msg="RemoveContainer for \"4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746\"" May 8 00:23:39.219866 containerd[1489]: time="2025-05-08T00:23:39.219847635Z" level=info msg="RemoveContainer for \"4557c1cf4e9ee72e68a6bbe239c824b1f2775c36b61be35bbf9dd51b16fd6746\" returns successfully" May 8 00:23:39.976215 sshd[4292]: Connection closed by 139.178.89.65 port 57366 May 8 00:23:39.977003 sshd-session[4290]: pam_unix(sshd:session): session closed for user core May 8 00:23:39.981169 systemd[1]: sshd@21-172.232.0.241:22-139.178.89.65:57366.service: Deactivated successfully. May 8 00:23:39.983767 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:23:39.984603 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit. May 8 00:23:39.985369 systemd-logind[1466]: Removed session 22. May 8 00:23:40.041634 systemd[1]: Started sshd@22-172.232.0.241:22-139.178.89.65:35656.service - OpenSSH per-connection server daemon (139.178.89.65:35656). 
May 8 00:23:40.370197 sshd[4451]: Accepted publickey for core from 139.178.89.65 port 35656 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4
May 8 00:23:40.371927 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:40.376324 systemd-logind[1466]: New session 23 of user core.
May 8 00:23:40.385575 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:23:40.854860 containerd[1489]: time="2025-05-08T00:23:40.854803907Z" level=info msg="StopPodSandbox for \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\""
May 8 00:23:40.856394 containerd[1489]: time="2025-05-08T00:23:40.855297854Z" level=info msg="TearDown network for sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" successfully"
May 8 00:23:40.856394 containerd[1489]: time="2025-05-08T00:23:40.855334604Z" level=info msg="StopPodSandbox for \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" returns successfully"
May 8 00:23:40.857472 containerd[1489]: time="2025-05-08T00:23:40.856817495Z" level=info msg="RemovePodSandbox for \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\""
May 8 00:23:40.857472 containerd[1489]: time="2025-05-08T00:23:40.856837635Z" level=info msg="Forcibly stopping sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\""
May 8 00:23:40.857472 containerd[1489]: time="2025-05-08T00:23:40.856898504Z" level=info msg="TearDown network for sandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" successfully"
May 8 00:23:40.863872 containerd[1489]: time="2025-05-08T00:23:40.863839762Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:23:40.863986 containerd[1489]: time="2025-05-08T00:23:40.863970701Z" level=info msg="RemovePodSandbox \"45cfebf645288a18a34a87d5f904a401c6b9928c306c32d173203140a52a9af0\" returns successfully"
May 8 00:23:40.864255 kubelet[2684]: I0508 00:23:40.864195 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" path="/var/lib/kubelet/pods/b90ad5d5-9c73-492d-ac40-0214d42fee56/volumes"
May 8 00:23:40.868487 containerd[1489]: time="2025-05-08T00:23:40.868423565Z" level=info msg="StopPodSandbox for \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\""
May 8 00:23:40.868728 containerd[1489]: time="2025-05-08T00:23:40.868674643Z" level=info msg="TearDown network for sandbox \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" successfully"
May 8 00:23:40.868728 containerd[1489]: time="2025-05-08T00:23:40.868690703Z" level=info msg="StopPodSandbox for \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" returns successfully"
May 8 00:23:40.870283 containerd[1489]: time="2025-05-08T00:23:40.868992531Z" level=info msg="RemovePodSandbox for \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\""
May 8 00:23:40.870283 containerd[1489]: time="2025-05-08T00:23:40.869022302Z" level=info msg="Forcibly stopping sandbox \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\""
May 8 00:23:40.870283 containerd[1489]: time="2025-05-08T00:23:40.869065921Z" level=info msg="TearDown network for sandbox \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" successfully"
May 8 00:23:40.872430 containerd[1489]: time="2025-05-08T00:23:40.871576756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:23:40.872587 containerd[1489]: time="2025-05-08T00:23:40.872532090Z" level=info msg="RemovePodSandbox \"ae86fc58cbd1898477e5d81e9e059c476867496e721ec062c32ad9f2331bdad9\" returns successfully"
May 8 00:23:40.949330 kubelet[2684]: E0508 00:23:40.949306 2684 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:23:41.056894 kubelet[2684]: I0508 00:23:41.056603 2684 topology_manager.go:215] "Topology Admit Handler" podUID="1a2e5815-0f7e-4421-b70f-1149d2d8a41e" podNamespace="kube-system" podName="cilium-ffspf"
May 8 00:23:41.057280 kubelet[2684]: E0508 00:23:41.057186 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" containerName="mount-cgroup"
May 8 00:23:41.057347 kubelet[2684]: E0508 00:23:41.057336 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" containerName="apply-sysctl-overwrites"
May 8 00:23:41.057479 kubelet[2684]: E0508 00:23:41.057468 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" containerName="clean-cilium-state"
May 8 00:23:41.057546 kubelet[2684]: E0508 00:23:41.057534 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" containerName="cilium-agent"
May 8 00:23:41.057586 kubelet[2684]: E0508 00:23:41.057578 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" containerName="mount-bpf-fs"
May 8 00:23:41.057729 kubelet[2684]: E0508 00:23:41.057717 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fd34716-2694-4828-81df-ef45891c2edc" containerName="cilium-operator"
May 8 00:23:41.057808 kubelet[2684]: I0508 00:23:41.057797 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b90ad5d5-9c73-492d-ac40-0214d42fee56" containerName="cilium-agent"
May 8 00:23:41.057861 kubelet[2684]: I0508 00:23:41.057841 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd34716-2694-4828-81df-ef45891c2edc" containerName="cilium-operator"
May 8 00:23:41.067341 kubelet[2684]: W0508 00:23:41.067320 2684 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172-232-0-241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.067665 systemd[1]: Created slice kubepods-burstable-pod1a2e5815_0f7e_4421_b70f_1149d2d8a41e.slice - libcontainer container kubepods-burstable-pod1a2e5815_0f7e_4421_b70f_1149d2d8a41e.slice.
May 8 00:23:41.070644 kubelet[2684]: E0508 00:23:41.070628 2684 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172-232-0-241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.070768 kubelet[2684]: W0508 00:23:41.067617 2684 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-232-0-241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.070768 kubelet[2684]: E0508 00:23:41.070728 2684 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-232-0-241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.070768 kubelet[2684]: W0508 00:23:41.067655 2684 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172-232-0-241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.070768 kubelet[2684]: E0508 00:23:41.070743 2684 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172-232-0-241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.070768 kubelet[2684]: W0508 00:23:41.067704 2684 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-232-0-241" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.070883 kubelet[2684]: E0508 00:23:41.070756 2684 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-232-0-241" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-232-0-241' and this object
May 8 00:23:41.079821 kubelet[2684]: I0508 00:23:41.079717 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-run\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.079821 kubelet[2684]: I0508 00:23:41.079745 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-bpf-maps\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.079821 kubelet[2684]: I0508 00:23:41.079763 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-cgroup\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.079821 kubelet[2684]: I0508 00:23:41.079778 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-etc-cni-netd\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.079943 kubelet[2684]: I0508 00:23:41.079870 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-ipsec-secrets\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.079943 kubelet[2684]: I0508 00:23:41.079910 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-host-proc-sys-net\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.079943 kubelet[2684]: I0508 00:23:41.079938 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6wp\" (UniqueName: \"kubernetes.io/projected/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-kube-api-access-mb6wp\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080008 kubelet[2684]: I0508 00:23:41.079961 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-xtables-lock\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080008 kubelet[2684]: I0508 00:23:41.079976 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-host-proc-sys-kernel\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080008 kubelet[2684]: I0508 00:23:41.079991 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-lib-modules\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080008 kubelet[2684]: I0508 00:23:41.080007 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-clustermesh-secrets\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080089 kubelet[2684]: I0508 00:23:41.080021 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-hubble-tls\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080089 kubelet[2684]: I0508 00:23:41.080034 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-hostproc\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080089 kubelet[2684]: I0508 00:23:41.080051 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cni-path\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.080089 kubelet[2684]: I0508 00:23:41.080066 2684 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-config-path\") pod \"cilium-ffspf\" (UID: \"1a2e5815-0f7e-4421-b70f-1149d2d8a41e\") " pod="kube-system/cilium-ffspf"
May 8 00:23:41.082711 sshd[4453]: Connection closed by 139.178.89.65 port 35656
May 8 00:23:41.082632 sshd-session[4451]: pam_unix(sshd:session): session closed for user core
May 8 00:23:41.089104 systemd[1]: sshd@22-172.232.0.241:22-139.178.89.65:35656.service: Deactivated successfully.
May 8 00:23:41.091888 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:23:41.094420 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit.
May 8 00:23:41.096054 systemd-logind[1466]: Removed session 23.
May 8 00:23:41.146721 systemd[1]: Started sshd@23-172.232.0.241:22-139.178.89.65:35668.service - OpenSSH per-connection server daemon (139.178.89.65:35668).
May 8 00:23:41.496409 sshd[4466]: Accepted publickey for core from 139.178.89.65 port 35668 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4
May 8 00:23:41.498756 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:41.504403 systemd-logind[1466]: New session 24 of user core.
May 8 00:23:41.513600 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 00:23:41.737203 sshd[4469]: Connection closed by 139.178.89.65 port 35668
May 8 00:23:41.737826 sshd-session[4466]: pam_unix(sshd:session): session closed for user core
May 8 00:23:41.742855 systemd[1]: sshd@23-172.232.0.241:22-139.178.89.65:35668.service: Deactivated successfully.
May 8 00:23:41.745203 systemd[1]: session-24.scope: Deactivated successfully.
May 8 00:23:41.746001 systemd-logind[1466]: Session 24 logged out. Waiting for processes to exit.
May 8 00:23:41.747134 systemd-logind[1466]: Removed session 24.
May 8 00:23:41.801675 systemd[1]: Started sshd@24-172.232.0.241:22-139.178.89.65:35672.service - OpenSSH per-connection server daemon (139.178.89.65:35672).
May 8 00:23:42.127685 sshd[4476]: Accepted publickey for core from 139.178.89.65 port 35672 ssh2: RSA SHA256:pibNW+8JyiZiCPlqRw4NQYJ+Adck1BbYu9myAO4iTB4
May 8 00:23:42.129258 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:23:42.134640 systemd-logind[1466]: New session 25 of user core.
May 8 00:23:42.140628 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 00:23:42.181342 kubelet[2684]: E0508 00:23:42.181256 2684 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
May 8 00:23:42.181342 kubelet[2684]: E0508 00:23:42.181289 2684 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-ffspf: failed to sync secret cache: timed out waiting for the condition
May 8 00:23:42.181725 kubelet[2684]: E0508 00:23:42.181363 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-hubble-tls podName:1a2e5815-0f7e-4421-b70f-1149d2d8a41e nodeName:}" failed. No retries permitted until 2025-05-08 00:23:42.681342523 +0000 UTC m=+181.926123047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-hubble-tls") pod "cilium-ffspf" (UID: "1a2e5815-0f7e-4421-b70f-1149d2d8a41e") : failed to sync secret cache: timed out waiting for the condition
May 8 00:23:42.181725 kubelet[2684]: E0508 00:23:42.181386 2684 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 8 00:23:42.181725 kubelet[2684]: E0508 00:23:42.181417 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-config-path podName:1a2e5815-0f7e-4421-b70f-1149d2d8a41e nodeName:}" failed. No retries permitted until 2025-05-08 00:23:42.681408602 +0000 UTC m=+181.926189126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-config-path") pod "cilium-ffspf" (UID: "1a2e5815-0f7e-4421-b70f-1149d2d8a41e") : failed to sync configmap cache: timed out waiting for the condition
May 8 00:23:42.181725 kubelet[2684]: E0508 00:23:42.181434 2684 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
May 8 00:23:42.181887 kubelet[2684]: E0508 00:23:42.181489 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-ipsec-secrets podName:1a2e5815-0f7e-4421-b70f-1149d2d8a41e nodeName:}" failed. No retries permitted until 2025-05-08 00:23:42.681481942 +0000 UTC m=+181.926262466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-cilium-ipsec-secrets") pod "cilium-ffspf" (UID: "1a2e5815-0f7e-4421-b70f-1149d2d8a41e") : failed to sync secret cache: timed out waiting for the condition
May 8 00:23:42.182100 kubelet[2684]: E0508 00:23:42.181995 2684 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
May 8 00:23:42.182100 kubelet[2684]: E0508 00:23:42.182071 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-clustermesh-secrets podName:1a2e5815-0f7e-4421-b70f-1149d2d8a41e nodeName:}" failed. No retries permitted until 2025-05-08 00:23:42.682055818 +0000 UTC m=+181.926836332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/1a2e5815-0f7e-4421-b70f-1149d2d8a41e-clustermesh-secrets") pod "cilium-ffspf" (UID: "1a2e5815-0f7e-4421-b70f-1149d2d8a41e") : failed to sync secret cache: timed out waiting for the condition
May 8 00:23:42.871440 kubelet[2684]: E0508 00:23:42.871402 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:42.872349 containerd[1489]: time="2025-05-08T00:23:42.871935528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffspf,Uid:1a2e5815-0f7e-4421-b70f-1149d2d8a41e,Namespace:kube-system,Attempt:0,}"
May 8 00:23:42.892940 containerd[1489]: time="2025-05-08T00:23:42.892713894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:23:42.892940 containerd[1489]: time="2025-05-08T00:23:42.892764844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:23:42.892940 containerd[1489]: time="2025-05-08T00:23:42.892775414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:23:42.892940 containerd[1489]: time="2025-05-08T00:23:42.892865203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:23:42.918583 systemd[1]: Started cri-containerd-a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f.scope - libcontainer container a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f.
May 8 00:23:42.940656 containerd[1489]: time="2025-05-08T00:23:42.940356890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ffspf,Uid:1a2e5815-0f7e-4421-b70f-1149d2d8a41e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\""
May 8 00:23:42.941279 kubelet[2684]: E0508 00:23:42.940967 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:42.943208 containerd[1489]: time="2025-05-08T00:23:42.943187843Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:23:42.950819 containerd[1489]: time="2025-05-08T00:23:42.950795547Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913\""
May 8 00:23:42.951879 containerd[1489]: time="2025-05-08T00:23:42.951225055Z" level=info msg="StartContainer for \"11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913\""
May 8 00:23:42.984567 systemd[1]: Started cri-containerd-11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913.scope - libcontainer container 11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913.
May 8 00:23:43.011426 containerd[1489]: time="2025-05-08T00:23:43.011376366Z" level=info msg="StartContainer for \"11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913\" returns successfully"
May 8 00:23:43.023985 systemd[1]: cri-containerd-11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913.scope: Deactivated successfully.
May 8 00:23:43.049651 containerd[1489]: time="2025-05-08T00:23:43.049569260Z" level=info msg="shim disconnected" id=11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913 namespace=k8s.io
May 8 00:23:43.049651 containerd[1489]: time="2025-05-08T00:23:43.049642899Z" level=warning msg="cleaning up after shim disconnected" id=11f556c52818ad8e830281e4b3df09fb51dad6361797f401f7898d9408cb3913 namespace=k8s.io
May 8 00:23:43.049651 containerd[1489]: time="2025-05-08T00:23:43.049652189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:23:43.207301 kubelet[2684]: E0508 00:23:43.205935 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:43.208202 containerd[1489]: time="2025-05-08T00:23:43.208162639Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:23:43.220597 containerd[1489]: time="2025-05-08T00:23:43.220461326Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c\""
May 8 00:23:43.221679 containerd[1489]: time="2025-05-08T00:23:43.221631889Z" level=info msg="StartContainer for \"11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c\""
May 8 00:23:43.248757 systemd[1]: Started cri-containerd-11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c.scope - libcontainer container 11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c.
May 8 00:23:43.275160 containerd[1489]: time="2025-05-08T00:23:43.274719534Z" level=info msg="StartContainer for \"11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c\" returns successfully"
May 8 00:23:43.284084 systemd[1]: cri-containerd-11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c.scope: Deactivated successfully.
May 8 00:23:43.311381 containerd[1489]: time="2025-05-08T00:23:43.311215368Z" level=info msg="shim disconnected" id=11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c namespace=k8s.io
May 8 00:23:43.311381 containerd[1489]: time="2025-05-08T00:23:43.311258428Z" level=warning msg="cleaning up after shim disconnected" id=11cf59113e502b02251412fb86113f66b5ef96e45558cb79efec60abb201ac2c namespace=k8s.io
May 8 00:23:43.311381 containerd[1489]: time="2025-05-08T00:23:43.311266138Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:23:43.323306 containerd[1489]: time="2025-05-08T00:23:43.322618491Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:23:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 00:23:43.699431 systemd[1]: run-containerd-runc-k8s.io-a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f-runc.nXAZZz.mount: Deactivated successfully.
May 8 00:23:44.208896 kubelet[2684]: E0508 00:23:44.208846 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:44.211534 containerd[1489]: time="2025-05-08T00:23:44.211191308Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:23:44.228964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213484915.mount: Deactivated successfully.
May 8 00:23:44.230794 containerd[1489]: time="2025-05-08T00:23:44.230201966Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5\""
May 8 00:23:44.230905 containerd[1489]: time="2025-05-08T00:23:44.230872592Z" level=info msg="StartContainer for \"88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5\""
May 8 00:23:44.288581 systemd[1]: Started cri-containerd-88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5.scope - libcontainer container 88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5.
May 8 00:23:44.350089 containerd[1489]: time="2025-05-08T00:23:44.350015840Z" level=info msg="StartContainer for \"88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5\" returns successfully"
May 8 00:23:44.351266 systemd[1]: cri-containerd-88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5.scope: Deactivated successfully.
May 8 00:23:44.375888 containerd[1489]: time="2025-05-08T00:23:44.375819299Z" level=info msg="shim disconnected" id=88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5 namespace=k8s.io
May 8 00:23:44.375888 containerd[1489]: time="2025-05-08T00:23:44.375873098Z" level=warning msg="cleaning up after shim disconnected" id=88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5 namespace=k8s.io
May 8 00:23:44.375888 containerd[1489]: time="2025-05-08T00:23:44.375881248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:23:44.698897 systemd[1]: run-containerd-runc-k8s.io-88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5-runc.pFT2s5.mount: Deactivated successfully.
May 8 00:23:44.699023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88791fdd304256ab0a93f7f8ac5361b06c5e7698de32c75cfdd1e403e6ddefe5-rootfs.mount: Deactivated successfully.
May 8 00:23:44.777355 kubelet[2684]: I0508 00:23:44.777293 2684 setters.go:580] "Node became not ready" node="172-232-0-241" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:23:44Z","lastTransitionTime":"2025-05-08T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:23:45.212423 kubelet[2684]: E0508 00:23:45.212392 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:45.216491 containerd[1489]: time="2025-05-08T00:23:45.215575770Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:23:45.232995 containerd[1489]: time="2025-05-08T00:23:45.232947867Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf\""
May 8 00:23:45.235265 containerd[1489]: time="2025-05-08T00:23:45.234429249Z" level=info msg="StartContainer for \"af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf\""
May 8 00:23:45.267569 systemd[1]: Started cri-containerd-af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf.scope - libcontainer container af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf.
May 8 00:23:45.292822 systemd[1]: cri-containerd-af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf.scope: Deactivated successfully.
May 8 00:23:45.294432 containerd[1489]: time="2025-05-08T00:23:45.294400438Z" level=info msg="StartContainer for \"af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf\" returns successfully"
May 8 00:23:45.312981 containerd[1489]: time="2025-05-08T00:23:45.312733341Z" level=info msg="shim disconnected" id=af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf namespace=k8s.io
May 8 00:23:45.312981 containerd[1489]: time="2025-05-08T00:23:45.312822210Z" level=warning msg="cleaning up after shim disconnected" id=af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf namespace=k8s.io
May 8 00:23:45.312981 containerd[1489]: time="2025-05-08T00:23:45.312832280Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:23:45.699099 systemd[1]: run-containerd-runc-k8s.io-af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf-runc.3okLUV.mount: Deactivated successfully.
May 8 00:23:45.699424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af91ee267231f9c4fc545e1d92b9486cd1b133dba96932d9f36b1467ad5f01bf-rootfs.mount: Deactivated successfully.
May 8 00:23:45.950187 kubelet[2684]: E0508 00:23:45.950070 2684 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:23:46.216541 kubelet[2684]: E0508 00:23:46.216409 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:46.223468 containerd[1489]: time="2025-05-08T00:23:46.220220357Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:23:46.245945 containerd[1489]: time="2025-05-08T00:23:46.245945317Z" level=info msg="CreateContainer within sandbox \"a1051493e57952132d9ba887cdb39118df0372de967c4ed9b2929efb48fdc02f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67\""
May 8 00:23:46.246147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043333442.mount: Deactivated successfully.
May 8 00:23:46.248048 containerd[1489]: time="2025-05-08T00:23:46.247180239Z" level=info msg="StartContainer for \"3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67\""
May 8 00:23:46.282582 systemd[1]: Started cri-containerd-3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67.scope - libcontainer container 3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67.
May 8 00:23:46.314967 containerd[1489]: time="2025-05-08T00:23:46.313212877Z" level=info msg="StartContainer for \"3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67\" returns successfully"
May 8 00:23:46.698992 systemd[1]: run-containerd-runc-k8s.io-3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67-runc.ztqZV4.mount: Deactivated successfully.
May 8 00:23:46.784557 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:23:47.220827 kubelet[2684]: E0508 00:23:47.220791 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:47.233317 kubelet[2684]: I0508 00:23:47.233023 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ffspf" podStartSLOduration=6.233007615 podStartE2EDuration="6.233007615s" podCreationTimestamp="2025-05-08 00:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:23:47.232782286 +0000 UTC m=+186.477562810" watchObservedRunningTime="2025-05-08 00:23:47.233007615 +0000 UTC m=+186.477788139"
May 8 00:23:48.541199 systemd[1]: run-containerd-runc-k8s.io-3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67-runc.UftvQY.mount: Deactivated successfully.
May 8 00:23:48.874149 kubelet[2684]: E0508 00:23:48.873771 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:49.631587 systemd-networkd[1396]: lxc_health: Link UP
May 8 00:23:49.640759 systemd-networkd[1396]: lxc_health: Gained carrier
May 8 00:23:50.666270 systemd[1]: run-containerd-runc-k8s.io-3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67-runc.05WH2r.mount: Deactivated successfully.
May 8 00:23:50.872922 kubelet[2684]: E0508 00:23:50.872875 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:51.228064 kubelet[2684]: E0508 00:23:51.227972 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:51.655660 systemd-networkd[1396]: lxc_health: Gained IPv6LL
May 8 00:23:52.229223 kubelet[2684]: E0508 00:23:52.228991 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
May 8 00:23:54.948487 systemd[1]: run-containerd-runc-k8s.io-3719511671da9e40f9078089298cccf964167708c7187224a324bae7d118dd67-runc.aTDtBw.mount: Deactivated successfully.
May 8 00:23:55.046637 sshd[4478]: Connection closed by 139.178.89.65 port 35672
May 8 00:23:55.047277 sshd-session[4476]: pam_unix(sshd:session): session closed for user core
May 8 00:23:55.051593 systemd[1]: sshd@24-172.232.0.241:22-139.178.89.65:35672.service: Deactivated successfully.
May 8 00:23:55.054052 systemd[1]: session-25.scope: Deactivated successfully.
May 8 00:23:55.055231 systemd-logind[1466]: Session 25 logged out. Waiting for processes to exit.
May 8 00:23:55.056291 systemd-logind[1466]: Removed session 25.