Aug 13 00:53:21.942522 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 00:53:21.942542 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:53:21.942550 kernel: BIOS-provided physical RAM map: Aug 13 00:53:21.942556 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 00:53:21.942561 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 00:53:21.942567 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 00:53:21.942574 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Aug 13 00:53:21.942579 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Aug 13 00:53:21.942586 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 00:53:21.942592 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 00:53:21.942598 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 00:53:21.942603 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 00:53:21.942609 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 00:53:21.942614 kernel: NX (Execute Disable) protection: active Aug 13 00:53:21.942622 kernel: SMBIOS 2.8 present. 
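The BIOS-e820 map above lists two "usable" ranges. A minimal sketch, with the range bounds copied from the log (end addresses are inclusive), sums them to the physical RAM the kernel can actually use:

```python
# Usable ranges copied from the BIOS-e820 lines above (inclusive end addresses).
usable = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x000000009cfdbfff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total_bytes} bytes ({total_bytes / 2**20:.1f} MiB)")
# → usable RAM: 2633481216 bytes (2511.5 MiB)
```

This agrees with the later "Memory: 2436696K/2571752K available" line: 2571752 KiB is the same ~2511.5 MiB total, before the kernel's own reservations.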
Aug 13 00:53:21.942637 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Aug 13 00:53:21.942643 kernel: Hypervisor detected: KVM Aug 13 00:53:21.942649 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:53:21.942658 kernel: kvm-clock: cpu 0, msr 6d19e001, primary cpu clock Aug 13 00:53:21.942664 kernel: kvm-clock: using sched offset of 3839449488 cycles Aug 13 00:53:21.942671 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:53:21.942677 kernel: tsc: Detected 2794.750 MHz processor Aug 13 00:53:21.942684 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:53:21.942692 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:53:21.942699 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Aug 13 00:53:21.942705 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:53:21.942711 kernel: Using GB pages for direct mapping Aug 13 00:53:21.942717 kernel: ACPI: Early table checksum verification disabled Aug 13 00:53:21.942724 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Aug 13 00:53:21.942730 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:53:21.942737 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:53:21.942743 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:53:21.942750 kernel: ACPI: FACS 0x000000009CFE0000 000040 Aug 13 00:53:21.942757 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:53:21.942764 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:53:21.942771 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:53:21.942778 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 
13 00:53:21.942786 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Aug 13 00:53:21.942792 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Aug 13 00:53:21.942799 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Aug 13 00:53:21.942809 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Aug 13 00:53:21.942815 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Aug 13 00:53:21.942822 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Aug 13 00:53:21.942841 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Aug 13 00:53:21.942847 kernel: No NUMA configuration found Aug 13 00:53:21.942854 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Aug 13 00:53:21.942863 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Aug 13 00:53:21.942870 kernel: Zone ranges: Aug 13 00:53:21.942972 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:53:21.942987 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Aug 13 00:53:21.942998 kernel: Normal empty Aug 13 00:53:21.943004 kernel: Movable zone start for each node Aug 13 00:53:21.943018 kernel: Early memory node ranges Aug 13 00:53:21.943025 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 00:53:21.943032 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Aug 13 00:53:21.943046 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Aug 13 00:53:21.943054 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:53:21.943061 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 00:53:21.943067 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Aug 13 00:53:21.943074 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:53:21.943081 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:53:21.943087 kernel: IOAPIC[0]: apic_id 0, version 17, address 
0xfec00000, GSI 0-23 Aug 13 00:53:21.943106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:53:21.943113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:53:21.943120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:53:21.943132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:53:21.943151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:53:21.943160 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:53:21.943174 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:53:21.943181 kernel: TSC deadline timer available Aug 13 00:53:21.943188 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Aug 13 00:53:21.943195 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 00:53:21.943207 kernel: kvm-guest: setup PV sched yield Aug 13 00:53:21.943223 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 00:53:21.943233 kernel: Booting paravirtualized kernel on KVM Aug 13 00:53:21.943240 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:53:21.943247 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Aug 13 00:53:21.943258 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Aug 13 00:53:21.943273 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Aug 13 00:53:21.943280 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 13 00:53:21.943293 kernel: kvm-guest: setup async PF for cpu 0 Aug 13 00:53:21.943300 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Aug 13 00:53:21.943307 kernel: kvm-guest: PV spinlocks enabled Aug 13 00:53:21.943316 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:53:21.943323 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Aug 13 00:53:21.943329 kernel: Policy zone: DMA32 Aug 13 00:53:21.943337 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:53:21.943345 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:53:21.943351 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:53:21.943363 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:53:21.943388 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:53:21.943429 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 134796K reserved, 0K cma-reserved) Aug 13 00:53:21.943446 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 13 00:53:21.943461 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 00:53:21.943473 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 00:53:21.943491 kernel: rcu: Hierarchical RCU implementation. Aug 13 00:53:21.943511 kernel: rcu: RCU event tracing is enabled. Aug 13 00:53:21.943528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 13 00:53:21.943540 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:53:21.943557 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:53:21.943576 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
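The "(order: N, M bytes)" pairs in the hash-table lines above follow a simple rule: the allocation is 2^N pages. A sketch, assuming the standard 4 KiB x86-64 page size:

```python
PAGE_SIZE = 4096  # assumption: standard x86-64 page size

def table_bytes(order):
    # An order-N allocation is 2**N contiguous pages.
    return PAGE_SIZE << order

# Values from the log above:
print(table_bytes(10))  # Dentry cache, order 10 → 4194304
print(table_bytes(9))   # Inode cache,  order 9 → 2097152
```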
Aug 13 00:53:21.943590 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 13 00:53:21.943599 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 13 00:53:21.943613 kernel: random: crng init done Aug 13 00:53:21.943621 kernel: Console: colour VGA+ 80x25 Aug 13 00:53:21.943628 kernel: printk: console [ttyS0] enabled Aug 13 00:53:21.943635 kernel: ACPI: Core revision 20210730 Aug 13 00:53:21.943642 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 00:53:21.943648 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:53:21.943657 kernel: x2apic enabled Aug 13 00:53:21.943664 kernel: Switched APIC routing to physical x2apic. Aug 13 00:53:21.943673 kernel: kvm-guest: setup PV IPIs Aug 13 00:53:21.943685 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:53:21.943692 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 13 00:53:21.943704 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Aug 13 00:53:21.943711 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 00:53:21.943718 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 00:53:21.943724 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 00:53:21.943752 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:53:21.943771 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:53:21.943786 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:53:21.943795 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 13 00:53:21.943810 kernel: RETBleed: Mitigation: untrained return thunk Aug 13 00:53:21.944269 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:53:21.944278 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 00:53:21.944285 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:53:21.944292 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:53:21.944302 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:53:21.944309 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:53:21.944317 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 00:53:21.944324 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:53:21.944331 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:53:21.944338 kernel: LSM: Security Framework initializing Aug 13 00:53:21.944345 kernel: SELinux: Initializing. 
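The BogoMIPS figure above can be reproduced from the reported lpj value. The kernel computes BogoMIPS as lpj / (500000 / HZ); HZ=1000 is an assumption here (a common distro configuration, and the value consistent with this log's numbers):

```python
HZ = 1000           # assumption: kernel tick rate
lpj = 2794750       # loops-per-jiffy, from the log line above

per_cpu = lpj * HZ / 500000
total = 4 * per_cpu  # 4 CPUs are brought online later in the log
print(per_cpu, total)  # → 5589.5 22358.0
```

The total matches the later "Total of 4 processors activated (22358.00 BogoMIPS)" line.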
Aug 13 00:53:21.944352 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:53:21.944363 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:53:21.944370 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 13 00:53:21.944377 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 00:53:21.944384 kernel: ... version: 0 Aug 13 00:53:21.944391 kernel: ... bit width: 48 Aug 13 00:53:21.944398 kernel: ... generic registers: 6 Aug 13 00:53:21.944405 kernel: ... value mask: 0000ffffffffffff Aug 13 00:53:21.944412 kernel: ... max period: 00007fffffffffff Aug 13 00:53:21.944420 kernel: ... fixed-purpose events: 0 Aug 13 00:53:21.944431 kernel: ... event mask: 000000000000003f Aug 13 00:53:21.944438 kernel: signal: max sigframe size: 1776 Aug 13 00:53:21.944445 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:53:21.944452 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:53:21.944459 kernel: x86: Booting SMP configuration: Aug 13 00:53:21.944466 kernel: .... 
node #0, CPUs: #1 Aug 13 00:53:21.944473 kernel: kvm-clock: cpu 1, msr 6d19e041, secondary cpu clock Aug 13 00:53:21.944480 kernel: kvm-guest: setup async PF for cpu 1 Aug 13 00:53:21.944487 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Aug 13 00:53:21.944496 kernel: #2 Aug 13 00:53:21.944503 kernel: kvm-clock: cpu 2, msr 6d19e081, secondary cpu clock Aug 13 00:53:21.944510 kernel: kvm-guest: setup async PF for cpu 2 Aug 13 00:53:21.944517 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Aug 13 00:53:21.944524 kernel: #3 Aug 13 00:53:21.944534 kernel: kvm-clock: cpu 3, msr 6d19e0c1, secondary cpu clock Aug 13 00:53:21.944540 kernel: kvm-guest: setup async PF for cpu 3 Aug 13 00:53:21.944548 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Aug 13 00:53:21.944555 kernel: smp: Brought up 1 node, 4 CPUs Aug 13 00:53:21.944566 kernel: smpboot: Max logical packages: 1 Aug 13 00:53:21.944573 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 13 00:53:21.944580 kernel: devtmpfs: initialized Aug 13 00:53:21.944587 kernel: x86/mm: Memory block size: 128MB Aug 13 00:53:21.944594 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:53:21.944601 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 13 00:53:21.944608 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:53:21.944616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:53:21.944622 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:53:21.944631 kernel: audit: type=2000 audit(1755046401.308:1): state=initialized audit_enabled=0 res=1 Aug 13 00:53:21.944638 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:53:21.944645 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:53:21.944652 kernel: cpuidle: using governor menu Aug 13 00:53:21.944659 kernel: ACPI: bus type PCI registered Aug 13 00:53:21.944665 kernel: acpiphp: ACPI Hot 
Plug PCI Controller Driver version: 0.5 Aug 13 00:53:21.944673 kernel: dca service started, version 1.12.1 Aug 13 00:53:21.944680 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 00:53:21.944687 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Aug 13 00:53:21.944695 kernel: PCI: Using configuration type 1 for base access Aug 13 00:53:21.944703 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 00:53:21.944710 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:53:21.944717 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:53:21.944724 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:53:21.944731 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:53:21.944738 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:53:21.944745 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:53:21.944752 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:53:21.944760 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:53:21.944768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:53:21.944775 kernel: ACPI: Interpreter enabled Aug 13 00:53:21.944782 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 00:53:21.944791 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:53:21.944799 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:53:21.944806 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 00:53:21.944813 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:53:21.944990 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:53:21.946809 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 00:53:21.946933 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 
00:53:21.946943 kernel: PCI host bridge to bus 0000:00 Aug 13 00:53:21.947043 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:53:21.947156 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:53:21.947267 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:53:21.947526 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Aug 13 00:53:21.948618 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 00:53:21.948687 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Aug 13 00:53:21.948752 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:53:21.948872 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 00:53:21.948967 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Aug 13 00:53:21.949054 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Aug 13 00:53:21.949143 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Aug 13 00:53:21.949217 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Aug 13 00:53:21.949289 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:53:21.949430 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 00:53:21.949574 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Aug 13 00:53:21.949698 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Aug 13 00:53:21.950614 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 00:53:21.950717 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Aug 13 00:53:21.950794 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 00:53:21.950881 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Aug 13 00:53:21.950953 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Aug 13 00:53:21.951043 kernel: 
pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:53:21.951126 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Aug 13 00:53:21.951202 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Aug 13 00:53:21.951296 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Aug 13 00:53:21.951392 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Aug 13 00:53:21.951489 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 00:53:21.951589 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 00:53:21.951699 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 00:53:21.951774 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Aug 13 00:53:21.951879 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Aug 13 00:53:21.951970 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 00:53:21.952044 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 00:53:21.952054 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:53:21.952061 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:53:21.952069 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:53:21.952076 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:53:21.952083 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 00:53:21.952094 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 00:53:21.952109 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 00:53:21.952116 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 00:53:21.952123 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 00:53:21.952130 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 00:53:21.952137 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 00:53:21.952144 kernel: ACPI: PCI: Interrupt link GSID 
configured for IRQ 19 Aug 13 00:53:21.952154 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 00:53:21.952161 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 00:53:21.952170 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 00:53:21.952177 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 00:53:21.952184 kernel: iommu: Default domain type: Translated Aug 13 00:53:21.952194 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:53:21.952270 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 00:53:21.952341 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:53:21.952422 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 00:53:21.952433 kernel: vgaarb: loaded Aug 13 00:53:21.952440 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 00:53:21.952450 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:53:21.952457 kernel: PTP clock support registered Aug 13 00:53:21.952464 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:53:21.952471 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:53:21.952478 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 00:53:21.952485 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Aug 13 00:53:21.952492 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 00:53:21.952500 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 00:53:21.952507 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:53:21.952515 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:53:21.952522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:53:21.952530 kernel: pnp: PnP ACPI init Aug 13 00:53:21.952636 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 00:53:21.952647 kernel: pnp: PnP ACPI: found 6 devices Aug 13 00:53:21.952654 kernel: 
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:53:21.952661 kernel: NET: Registered PF_INET protocol family Aug 13 00:53:21.952668 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:53:21.952677 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:53:21.952685 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:53:21.952692 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:53:21.952699 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Aug 13 00:53:21.952706 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:53:21.952713 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:53:21.952721 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:53:21.952730 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:53:21.952741 kernel: NET: Registered PF_XDP protocol family Aug 13 00:53:21.952841 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:53:21.952912 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:53:21.952985 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:53:21.953080 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Aug 13 00:53:21.953188 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 00:53:21.953285 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Aug 13 00:53:21.953297 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:53:21.953304 kernel: Initialise system trusted keyrings Aug 13 00:53:21.953322 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:53:21.953340 kernel: Key type asymmetric registered Aug 13 00:53:21.953349 kernel: Asymmetric key parser 'x509' 
registered Aug 13 00:53:21.953361 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:53:21.953368 kernel: io scheduler mq-deadline registered Aug 13 00:53:21.953375 kernel: io scheduler kyber registered Aug 13 00:53:21.953386 kernel: io scheduler bfq registered Aug 13 00:53:21.953395 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:53:21.953405 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 00:53:21.953418 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 00:53:21.953442 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 00:53:21.953460 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:53:21.953470 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:53:21.953477 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:53:21.953484 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:53:21.953491 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:53:21.953500 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:53:21.953656 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 13 00:53:21.953891 kernel: rtc_cmos 00:04: registered as rtc0 Aug 13 00:53:21.954695 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:53:21 UTC (1755046401) Aug 13 00:53:21.954793 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 00:53:21.954804 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:53:21.954812 kernel: Segment Routing with IPv6 Aug 13 00:53:21.954819 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:53:21.954839 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:53:21.954846 kernel: Key type dns_resolver registered Aug 13 00:53:21.954857 kernel: IPI shorthand broadcast: enabled Aug 13 00:53:21.954864 kernel: sched_clock: Marking stable (477122454, 101243872)->(595025442, -16659116) Aug 13 00:53:21.954871 
kernel: registered taskstats version 1 Aug 13 00:53:21.954878 kernel: Loading compiled-in X.509 certificates Aug 13 00:53:21.954885 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 00:53:21.954892 kernel: Key type .fscrypt registered Aug 13 00:53:21.954899 kernel: Key type fscrypt-provisioning registered Aug 13 00:53:21.954906 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:53:21.954915 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:53:21.954922 kernel: ima: No architecture policies found Aug 13 00:53:21.954929 kernel: clk: Disabling unused clocks Aug 13 00:53:21.954936 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 00:53:21.954943 kernel: Write protecting the kernel read-only data: 28672k Aug 13 00:53:21.954951 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 00:53:21.954959 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 00:53:21.954968 kernel: Run /init as init process Aug 13 00:53:21.954977 kernel: with arguments: Aug 13 00:53:21.954987 kernel: /init Aug 13 00:53:21.954998 kernel: with environment: Aug 13 00:53:21.955016 kernel: HOME=/ Aug 13 00:53:21.955023 kernel: TERM=linux Aug 13 00:53:21.955030 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:53:21.955039 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:53:21.955049 systemd[1]: Detected virtualization kvm. Aug 13 00:53:21.955063 systemd[1]: Detected architecture x86-64. Aug 13 00:53:21.955079 systemd[1]: Running in initrd. Aug 13 00:53:21.955089 systemd[1]: No hostname configured, using default hostname. 
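The rtc_cmos line earlier ("setting system clock to 2025-08-13T00:53:21 UTC (1755046401)") gives the wall-clock time both as an ISO string and as a Unix epoch. A quick sketch confirming the two encodings agree:

```python
from datetime import datetime, timezone

# Epoch value from the rtc_cmos log line above.
ts = datetime.fromtimestamp(1755046401, tz=timezone.utc)
print(ts.strftime("%Y-%m-%dT%H:%M:%S UTC"))  # → 2025-08-13T00:53:21 UTC
```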
Aug 13 00:53:21.955109 systemd[1]: Hostname set to . Aug 13 00:53:21.955130 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:53:21.955142 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:53:21.955150 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:53:21.955157 systemd[1]: Reached target cryptsetup.target. Aug 13 00:53:21.955165 systemd[1]: Reached target paths.target. Aug 13 00:53:21.955172 systemd[1]: Reached target slices.target. Aug 13 00:53:21.955182 systemd[1]: Reached target swap.target. Aug 13 00:53:21.955196 systemd[1]: Reached target timers.target. Aug 13 00:53:21.955206 systemd[1]: Listening on iscsid.socket. Aug 13 00:53:21.955214 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:53:21.955222 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:53:21.955234 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:53:21.955244 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:53:21.955255 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:53:21.955262 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:53:21.955270 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:53:21.955278 systemd[1]: Reached target sockets.target. Aug 13 00:53:21.955286 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:53:21.955294 systemd[1]: Finished network-cleanup.service. Aug 13 00:53:21.955301 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:53:21.955326 systemd[1]: Starting systemd-journald.service... Aug 13 00:53:21.955348 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:53:21.955358 systemd[1]: Starting systemd-resolved.service... Aug 13 00:53:21.955366 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:53:21.955373 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:53:21.955381 systemd[1]: Finished systemd-fsck-usr.service. 
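The PCI enumeration earlier in the log uses domain:bus:device.function addresses (e.g. "0000:00:1f.2", the 8086:2922 SATA controller). A small sketch decoding that notation; all fields are hexadecimal:

```python
def parse_bdf(addr):
    # "DDDD:BB:dd.f" → (domain, bus, device, function), all parsed as hex.
    dom_bus, dev_fn = addr.rsplit(":", 1)
    domain, bus = dom_bus.split(":")
    dev, fn = dev_fn.split(".")
    return int(domain, 16), int(bus, 16), int(dev, 16), int(fn, 16)

print(parse_bdf("0000:00:1f.2"))  # → (0, 0, 31, 2)
```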
Aug 13 00:53:21.955389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:53:21.955397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:53:21.955414 systemd-journald[197]: Journal started
Aug 13 00:53:21.955490 systemd-journald[197]: Runtime Journal (/run/log/journal/87099a11444e4748a21e0b5d6bd2cc1e) is 6.0M, max 48.5M, 42.5M free.
Aug 13 00:53:21.940279 systemd-modules-load[198]: Inserted module 'overlay'
Aug 13 00:53:21.990467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:53:21.990497 kernel: Bridge firewalling registered
Aug 13 00:53:21.990510 kernel: audit: type=1130 audit(1755046401.989:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:21.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:21.970670 systemd-resolved[199]: Positive Trust Anchors:
Aug 13 00:53:21.998053 systemd[1]: Started systemd-journald.service.
Aug 13 00:53:21.998086 kernel: audit: type=1130 audit(1755046401.995:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:21.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:21.970690 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:53:22.006644 kernel: audit: type=1130 audit(1755046402.000:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:21.970717 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:53:22.016769 kernel: audit: type=1130 audit(1755046402.006:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:21.974724 systemd-resolved[199]: Defaulting to hostname 'linux'.
Aug 13 00:53:21.989253 systemd-modules-load[198]: Inserted module 'br_netfilter'
Aug 13 00:53:21.997057 systemd[1]: Started systemd-resolved.service.
Aug 13 00:53:22.001653 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:53:22.007516 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:53:22.018908 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:53:22.024178 kernel: SCSI subsystem initialized
Aug 13 00:53:22.040993 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:53:22.041128 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:53:22.043982 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 00:53:22.044428 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:53:22.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.050713 systemd-modules-load[198]: Inserted module 'dm_multipath'
Aug 13 00:53:22.052050 kernel: audit: type=1130 audit(1755046402.044:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.051466 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:53:22.053756 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:53:22.060898 kernel: audit: type=1130 audit(1755046402.054:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.057168 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:53:22.065344 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:53:22.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.069335 dracut-cmdline[216]: dracut-dracut-053
Aug 13 00:53:22.071054 kernel: audit: type=1130 audit(1755046402.066:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.072910 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:53:22.160869 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:53:22.179875 kernel: iscsi: registered transport (tcp)
Aug 13 00:53:22.201999 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:53:22.202062 kernel: QLogic iSCSI HBA Driver
Aug 13 00:53:22.230010 systemd[1]: Finished dracut-cmdline.service.
Aug 13 00:53:22.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.231601 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 00:53:22.235998 kernel: audit: type=1130 audit(1755046402.229:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.280915 kernel: raid6: avx2x4 gen() 24812 MB/s
Aug 13 00:53:22.297878 kernel: raid6: avx2x4 xor() 6157 MB/s
Aug 13 00:53:22.314865 kernel: raid6: avx2x2 gen() 29340 MB/s
Aug 13 00:53:22.331865 kernel: raid6: avx2x2 xor() 17214 MB/s
Aug 13 00:53:22.348861 kernel: raid6: avx2x1 gen() 25335 MB/s
Aug 13 00:53:22.365866 kernel: raid6: avx2x1 xor() 14044 MB/s
Aug 13 00:53:22.382858 kernel: raid6: sse2x4 gen() 14488 MB/s
Aug 13 00:53:22.399856 kernel: raid6: sse2x4 xor() 6665 MB/s
Aug 13 00:53:22.416858 kernel: raid6: sse2x2 gen() 15906 MB/s
Aug 13 00:53:22.433868 kernel: raid6: sse2x2 xor() 9528 MB/s
Aug 13 00:53:22.450866 kernel: raid6: sse2x1 gen() 11704 MB/s
Aug 13 00:53:22.468220 kernel: raid6: sse2x1 xor() 7520 MB/s
Aug 13 00:53:22.468271 kernel: raid6: using algorithm avx2x2 gen() 29340 MB/s
Aug 13 00:53:22.468281 kernel: raid6: .... xor() 17214 MB/s, rmw enabled
Aug 13 00:53:22.468933 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:53:22.482863 kernel: xor: automatically using best checksumming function avx
Aug 13 00:53:22.578860 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Aug 13 00:53:22.586565 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 00:53:22.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.587000 audit: BPF prog-id=7 op=LOAD
Aug 13 00:53:22.590000 audit: BPF prog-id=8 op=LOAD
Aug 13 00:53:22.591399 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:53:22.592889 kernel: audit: type=1130 audit(1755046402.586:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.604697 systemd-udevd[399]: Using default interface naming scheme 'v252'.
Aug 13 00:53:22.608802 systemd[1]: Started systemd-udevd.service.
Aug 13 00:53:22.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.610067 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 00:53:22.621251 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Aug 13 00:53:22.644673 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 00:53:22.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.647043 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:53:22.686663 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:53:22.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:22.720447 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:53:22.727593 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:53:22.727610 kernel: GPT:9289727 != 19775487
Aug 13 00:53:22.727623 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:53:22.727641 kernel: GPT:9289727 != 19775487
Aug 13 00:53:22.727653 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:53:22.727665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:22.727678 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:53:22.732944 kernel: libata version 3.00 loaded.
Aug 13 00:53:22.744276 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 00:53:22.744322 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:53:22.744340 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:53:22.870474 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:53:22.870498 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 00:53:22.870633 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:53:22.870760 kernel: scsi host0: ahci
Aug 13 00:53:22.870944 kernel: scsi host1: ahci
Aug 13 00:53:22.871097 kernel: scsi host2: ahci
Aug 13 00:53:22.871235 kernel: scsi host3: ahci
Aug 13 00:53:22.871378 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (454)
Aug 13 00:53:22.871394 kernel: scsi host4: ahci
Aug 13 00:53:22.871532 kernel: scsi host5: ahci
Aug 13 00:53:22.871690 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Aug 13 00:53:22.871705 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Aug 13 00:53:22.871718 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Aug 13 00:53:22.871731 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Aug 13 00:53:22.871747 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Aug 13 00:53:22.871760 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Aug 13 00:53:22.866706 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 13 00:53:22.919103 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 13 00:53:22.920127 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 13 00:53:22.925167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 13 00:53:22.931610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:53:22.933202 systemd[1]: Starting disk-uuid.service...
Aug 13 00:53:22.942855 disk-uuid[524]: Primary Header is updated.
Aug 13 00:53:22.942855 disk-uuid[524]: Secondary Entries is updated.
Aug 13 00:53:22.942855 disk-uuid[524]: Secondary Header is updated.
Aug 13 00:53:22.946492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:22.948847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:22.952861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:23.183098 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:23.183158 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:23.183169 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:23.184857 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:23.185894 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 00:53:23.186857 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:23.186881 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 00:53:23.188125 kernel: ata3.00: applying bridge limits
Aug 13 00:53:23.188168 kernel: ata3.00: configured for UDMA/100
Aug 13 00:53:23.190846 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 00:53:23.220849 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 00:53:23.237489 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:53:23.237502 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 00:53:23.952846 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:23.953129 disk-uuid[525]: The operation has completed successfully.
Aug 13 00:53:23.978285 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:53:23.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:23.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:23.978379 systemd[1]: Finished disk-uuid.service.
Aug 13 00:53:23.983165 systemd[1]: Starting verity-setup.service...
Aug 13 00:53:23.995884 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 00:53:24.014199 systemd[1]: Found device dev-mapper-usr.device.
Aug 13 00:53:24.016438 systemd[1]: Mounting sysusr-usr.mount...
Aug 13 00:53:24.018308 systemd[1]: Finished verity-setup.service.
Aug 13 00:53:24.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.081651 systemd[1]: Mounted sysusr-usr.mount.
Aug 13 00:53:24.083048 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Aug 13 00:53:24.082231 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Aug 13 00:53:24.083086 systemd[1]: Starting ignition-setup.service...
Aug 13 00:53:24.084224 systemd[1]: Starting parse-ip-for-networkd.service...
Aug 13 00:53:24.096461 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:53:24.096485 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:53:24.096495 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:53:24.104547 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:53:24.112455 systemd[1]: Finished ignition-setup.service.
Aug 13 00:53:24.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.114065 systemd[1]: Starting ignition-fetch-offline.service...
Aug 13 00:53:24.157657 systemd[1]: Finished parse-ip-for-networkd.service.
Aug 13 00:53:24.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.159000 audit: BPF prog-id=9 op=LOAD
Aug 13 00:53:24.161298 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:53:24.177474 ignition[647]: Ignition 2.14.0
Aug 13 00:53:24.177483 ignition[647]: Stage: fetch-offline
Aug 13 00:53:24.177596 ignition[647]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:24.177606 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:24.177715 ignition[647]: parsed url from cmdline: ""
Aug 13 00:53:24.177718 ignition[647]: no config URL provided
Aug 13 00:53:24.177722 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:53:24.177729 ignition[647]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:53:24.177747 ignition[647]: op(1): [started] loading QEMU firmware config module
Aug 13 00:53:24.177751 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:53:24.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.185586 ignition[647]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:53:24.185605 systemd-networkd[716]: lo: Link UP
Aug 13 00:53:24.185609 systemd-networkd[716]: lo: Gained carrier
Aug 13 00:53:24.186090 systemd-networkd[716]: Enumeration completed
Aug 13 00:53:24.186195 systemd[1]: Started systemd-networkd.service.
Aug 13 00:53:24.186597 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:53:24.187456 systemd[1]: Reached target network.target.
Aug 13 00:53:24.187917 systemd-networkd[716]: eth0: Link UP
Aug 13 00:53:24.187921 systemd-networkd[716]: eth0: Gained carrier
Aug 13 00:53:24.189062 systemd[1]: Starting iscsiuio.service...
Aug 13 00:53:24.210709 systemd[1]: Started iscsiuio.service.
Aug 13 00:53:24.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.213694 systemd[1]: Starting iscsid.service...
Aug 13 00:53:24.217520 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 00:53:24.217520 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Aug 13 00:53:24.217520 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Aug 13 00:53:24.217520 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored.
Aug 13 00:53:24.217520 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 00:53:24.227330 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Aug 13 00:53:24.226415 systemd[1]: Started iscsid.service.
Aug 13 00:53:24.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.233867 systemd[1]: Starting dracut-initqueue.service...
Aug 13 00:53:24.245806 systemd[1]: Finished dracut-initqueue.service.
Aug 13 00:53:24.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.246363 systemd[1]: Reached target remote-fs-pre.target.
Aug 13 00:53:24.249385 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 00:53:24.249598 systemd[1]: Reached target remote-fs.target.
Aug 13 00:53:24.253072 systemd[1]: Starting dracut-pre-mount.service...
Aug 13 00:53:24.260207 ignition[647]: parsing config with SHA512: ab521a9f0788afadfd12129cab9437c59bf8b2d976097c3fa0bb3ae955741bdc5602c121c3da324de50c134dcfcc3bb41208faad8f502276f94c7780cddac9b1
Aug 13 00:53:24.263574 systemd[1]: Finished dracut-pre-mount.service.
Aug 13 00:53:24.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.263954 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:53:24.274273 unknown[647]: fetched base config from "system"
Aug 13 00:53:24.274285 unknown[647]: fetched user config from "qemu"
Aug 13 00:53:24.274990 ignition[647]: fetch-offline: fetch-offline passed
Aug 13 00:53:24.275072 ignition[647]: Ignition finished successfully
Aug 13 00:53:24.277470 systemd[1]: Finished ignition-fetch-offline.service.
Aug 13 00:53:24.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.279091 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:53:24.279949 systemd[1]: Starting ignition-kargs.service...
Aug 13 00:53:24.289259 ignition[737]: Ignition 2.14.0
Aug 13 00:53:24.289269 ignition[737]: Stage: kargs
Aug 13 00:53:24.289358 ignition[737]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:24.289367 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:24.290646 ignition[737]: kargs: kargs passed
Aug 13 00:53:24.292940 systemd[1]: Finished ignition-kargs.service.
Aug 13 00:53:24.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.290683 ignition[737]: Ignition finished successfully
Aug 13 00:53:24.294938 systemd[1]: Starting ignition-disks.service...
Aug 13 00:53:24.302492 ignition[743]: Ignition 2.14.0
Aug 13 00:53:24.302504 ignition[743]: Stage: disks
Aug 13 00:53:24.302592 ignition[743]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:24.302604 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:24.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.304576 systemd[1]: Finished ignition-disks.service.
Aug 13 00:53:24.303678 ignition[743]: disks: disks passed
Aug 13 00:53:24.305729 systemd[1]: Reached target initrd-root-device.target.
Aug 13 00:53:24.303714 ignition[743]: Ignition finished successfully
Aug 13 00:53:24.307510 systemd[1]: Reached target local-fs-pre.target.
Aug 13 00:53:24.308377 systemd[1]: Reached target local-fs.target.
Aug 13 00:53:24.308798 systemd[1]: Reached target sysinit.target.
Aug 13 00:53:24.309127 systemd[1]: Reached target basic.target.
Aug 13 00:53:24.310200 systemd[1]: Starting systemd-fsck-root.service...
Aug 13 00:53:24.323445 systemd-fsck[751]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Aug 13 00:53:24.328538 systemd[1]: Finished systemd-fsck-root.service.
Aug 13 00:53:24.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.330279 systemd[1]: Mounting sysroot.mount...
Aug 13 00:53:24.336631 systemd[1]: Mounted sysroot.mount.
Aug 13 00:53:24.337155 systemd[1]: Reached target initrd-root-fs.target.
Aug 13 00:53:24.339703 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Aug 13 00:53:24.338660 systemd[1]: Mounting sysroot-usr.mount...
Aug 13 00:53:24.340532 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Aug 13 00:53:24.340576 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:53:24.340603 systemd[1]: Reached target ignition-diskful.target.
Aug 13 00:53:24.342559 systemd[1]: Mounted sysroot-usr.mount.
Aug 13 00:53:24.346246 systemd[1]: Starting initrd-setup-root.service...
Aug 13 00:53:24.353703 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:53:24.357211 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:53:24.361604 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:53:24.365607 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:53:24.395569 systemd[1]: Finished initrd-setup-root.service.
Aug 13 00:53:24.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.396679 systemd[1]: Starting ignition-mount.service...
Aug 13 00:53:24.398997 systemd[1]: Starting sysroot-boot.service...
Aug 13 00:53:24.403806 bash[802]: umount: /sysroot/usr/share/oem: not mounted.
Aug 13 00:53:24.418925 systemd[1]: Finished sysroot-boot.service.
Aug 13 00:53:24.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:24.424640 ignition[803]: INFO : Ignition 2.14.0
Aug 13 00:53:24.424640 ignition[803]: INFO : Stage: mount
Aug 13 00:53:24.426275 ignition[803]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:24.426275 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:24.429183 ignition[803]: INFO : mount: mount passed
Aug 13 00:53:24.429957 ignition[803]: INFO : Ignition finished successfully
Aug 13 00:53:24.431374 systemd[1]: Finished ignition-mount.service.
Aug 13 00:53:24.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:25.027172 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 00:53:25.037980 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Aug 13 00:53:25.038018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:53:25.038028 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:53:25.038905 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:53:25.043129 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 00:53:25.044201 systemd[1]: Starting ignition-files.service...
Aug 13 00:53:25.061240 ignition[833]: INFO : Ignition 2.14.0
Aug 13 00:53:25.061240 ignition[833]: INFO : Stage: files
Aug 13 00:53:25.062894 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:25.062894 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:25.066120 ignition[833]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:53:25.068170 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:53:25.068170 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:53:25.071616 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:53:25.073051 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:53:25.074822 unknown[833]: wrote ssh authorized keys file for user: core
Aug 13 00:53:25.075918 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:53:25.077566 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:53:25.079299 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:53:25.080972 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:53:25.082845 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 00:53:25.134662 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:53:25.328038 systemd-networkd[716]: eth0: Gained IPv6LL
Aug 13 00:53:25.564601 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:53:25.566959 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:53:25.566959 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:53:25.783073 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Aug 13 00:53:25.893424 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:53:25.895566 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:53:25.897591 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:53:25.899520 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:53:25.901619 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:53:25.903574 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:53:25.905564 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:53:25.907497 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:53:25.909175 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:53:25.911008 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:53:25.912781 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:53:25.914480 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:53:25.916932 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:53:25.916932 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:53:25.921522 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 00:53:26.300065 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Aug 13 00:53:27.031268 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:53:27.031268 ignition[833]: INFO : files: op(d): [started] processing unit "containerd.service"
Aug 13 00:53:27.034779 ignition[833]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:53:27.037045 ignition[833]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:53:27.037045 ignition[833]: INFO : files: op(d): [finished] processing unit "containerd.service"
Aug 13 00:53:27.037045 ignition[833]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Aug 13 00:53:27.041607 ignition[833]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:53:27.041607 ignition[833]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:53:27.041607 ignition[833]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Aug 13 00:53:27.041607 ignition[833]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Aug 13 00:53:27.041607 ignition[833]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:53:27.049611 ignition[833]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:53:27.049611 ignition[833]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Aug 13 00:53:27.049611 ignition[833]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:53:27.049611 ignition[833]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:53:27.049611 ignition[833]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:53:27.049611 ignition[833]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:53:27.082406 ignition[833]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:53:27.083996 ignition[833]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:53:27.083996 ignition[833]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:53:27.083996 ignition[833]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:53:27.083996 ignition[833]: INFO : files: files passed
Aug 13 00:53:27.083996 ignition[833]: INFO : Ignition finished successfully
Aug 13 00:53:27.114534 kernel: kauditd_printk_skb: 23 callbacks suppressed
Aug 13 00:53:27.114558 kernel: audit: type=1130 audit(1755046407.086:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:27.114570 kernel: audit: type=1130 audit(1755046407.109:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:27.114582 kernel: audit: type=1130 audit(1755046407.114:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:27.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:27.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:27.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:27.084414 systemd[1]: Finished ignition-files.service.
Aug 13 00:53:27.121973 kernel: audit: type=1131 audit(1755046407.114:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.087640 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:53:27.092494 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:53:27.125655 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 00:53:27.093235 systemd[1]: Starting ignition-quench.service... Aug 13 00:53:27.128052 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:53:27.096158 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:53:27.109935 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:53:27.110026 systemd[1]: Finished ignition-quench.service. Aug 13 00:53:27.114651 systemd[1]: Reached target ignition-complete.target. Aug 13 00:53:27.121284 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:53:27.141978 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:53:27.142064 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:53:27.150737 kernel: audit: type=1130 audit(1755046407.141:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:27.151473 kernel: audit: type=1131 audit(1755046407.141:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.142707 systemd[1]: Reached target initrd-fs.target. Aug 13 00:53:27.150723 systemd[1]: Reached target initrd.target. Aug 13 00:53:27.151492 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:53:27.152227 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:53:27.163031 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:53:27.192057 kernel: audit: type=1130 audit(1755046407.186:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.188525 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:53:27.199116 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:53:27.200168 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:53:27.201996 systemd[1]: Stopped target timers.target. Aug 13 00:53:27.203570 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Aug 13 00:53:27.210560 kernel: audit: type=1131 audit(1755046407.204:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.203741 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:53:27.205352 systemd[1]: Stopped target initrd.target. Aug 13 00:53:27.210658 systemd[1]: Stopped target basic.target. Aug 13 00:53:27.212347 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:53:27.214073 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:53:27.215748 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:53:27.217700 systemd[1]: Stopped target remote-fs.target. Aug 13 00:53:27.219452 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:53:27.221278 systemd[1]: Stopped target sysinit.target. Aug 13 00:53:27.222946 systemd[1]: Stopped target local-fs.target. Aug 13 00:53:27.224630 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:53:27.226341 systemd[1]: Stopped target swap.target. Aug 13 00:53:27.233956 kernel: audit: type=1131 audit(1755046407.228:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.227914 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:53:27.228083 systemd[1]: Stopped dracut-pre-mount.service. 
Aug 13 00:53:27.240419 kernel: audit: type=1131 audit(1755046407.235:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.229777 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:53:27.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.233999 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:53:27.234137 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:53:27.236052 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:53:27.236187 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:53:27.240583 systemd[1]: Stopped target paths.target. Aug 13 00:53:27.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:53:27.242186 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:53:27.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.245892 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:53:27.246432 systemd[1]: Stopped target slices.target. Aug 13 00:53:27.260204 ignition[874]: INFO : Ignition 2.14.0 Aug 13 00:53:27.260204 ignition[874]: INFO : Stage: umount Aug 13 00:53:27.260204 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:53:27.260204 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:53:27.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.246611 systemd[1]: Stopped target sockets.target. 
Aug 13 00:53:27.270548 ignition[874]: INFO : umount: umount passed Aug 13 00:53:27.270548 ignition[874]: INFO : Ignition finished successfully Aug 13 00:53:27.246797 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:53:27.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.246904 systemd[1]: Closed iscsid.socket. Aug 13 00:53:27.247203 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:53:27.247293 systemd[1]: Closed iscsiuio.socket. Aug 13 00:53:27.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.247600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:53:27.278000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:53:27.247723 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:53:27.248185 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:53:27.248301 systemd[1]: Stopped ignition-files.service. Aug 13 00:53:27.249604 systemd[1]: Stopping ignition-mount.service... Aug 13 00:53:27.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.251051 systemd[1]: Stopping sysroot-boot.service... 
Aug 13 00:53:27.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.251323 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:53:27.251507 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:53:27.251890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:53:27.252069 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:53:27.256232 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:53:27.256334 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:53:27.261288 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:53:27.261374 systemd[1]: Stopped ignition-mount.service. Aug 13 00:53:27.262388 systemd[1]: Stopped target network.target. Aug 13 00:53:27.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.262516 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:53:27.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.262559 systemd[1]: Stopped ignition-disks.service. Aug 13 00:53:27.262902 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:53:27.262943 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:53:27.263058 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:53:27.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:27.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.263097 systemd[1]: Stopped ignition-setup.service. Aug 13 00:53:27.263334 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:53:27.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.263648 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:53:27.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.268845 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:53:27.271913 systemd-networkd[716]: eth0: DHCPv6 lease lost Aug 13 00:53:27.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.312000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:53:27.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.272661 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:53:27.272771 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:53:27.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:27.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.275517 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:53:27.275611 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:53:27.279347 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:53:27.279381 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:53:27.281636 systemd[1]: Stopping network-cleanup.service... Aug 13 00:53:27.282723 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:53:27.282765 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:53:27.284741 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:53:27.284776 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:53:27.286449 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:53:27.286484 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:53:27.287671 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:53:27.292351 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:53:27.295902 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:53:27.296025 systemd[1]: Stopped network-cleanup.service. Aug 13 00:53:27.297357 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:53:27.297464 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:53:27.299909 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:53:27.299955 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:53:27.301485 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:53:27.301511 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:53:27.303148 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Aug 13 00:53:27.303185 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:53:27.304658 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:53:27.304692 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:53:27.306237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:53:27.306271 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:53:27.308315 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:53:27.309295 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:53:27.309336 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 00:53:27.311068 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:53:27.311116 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:53:27.313113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:53:27.313164 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:53:27.315208 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:53:27.315734 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:53:27.315845 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:53:27.384317 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:53:27.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:27.384413 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:53:27.385325 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:53:27.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:27.386851 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:53:27.386900 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:53:27.389728 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:53:27.398919 systemd[1]: Switching root. Aug 13 00:53:27.400000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:53:27.400000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:53:27.400000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:53:27.402000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:53:27.402000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:53:27.419372 iscsid[722]: iscsid shutting down. Aug 13 00:53:27.420234 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Aug 13 00:53:27.420294 systemd-journald[197]: Journal stopped Aug 13 00:53:30.609769 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:53:30.609978 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:53:30.609997 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:53:30.610011 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:53:30.610024 kernel: SELinux: policy capability open_perms=1 Aug 13 00:53:30.610037 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:53:30.610053 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:53:30.610068 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:53:30.610082 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:53:30.610095 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:53:30.610108 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:53:30.610124 systemd[1]: Successfully loaded SELinux policy in 49.179ms. Aug 13 00:53:30.610153 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.056ms. 
Aug 13 00:53:30.610169 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:53:30.610184 systemd[1]: Detected virtualization kvm. Aug 13 00:53:30.610206 systemd[1]: Detected architecture x86-64. Aug 13 00:53:30.610220 systemd[1]: Detected first boot. Aug 13 00:53:30.610235 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:53:30.610249 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:53:30.610270 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:53:30.610285 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:30.610304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:53:30.610318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:53:30.610337 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:53:30.610352 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 00:53:30.610365 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:53:30.610379 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:53:30.610395 systemd[1]: Created slice system-getty.slice. Aug 13 00:53:30.610411 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:53:30.610425 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Aug 13 00:53:30.610439 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:53:30.610453 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:53:30.610472 systemd[1]: Created slice user.slice. Aug 13 00:53:30.610486 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:53:30.610501 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:53:30.610515 systemd[1]: Set up automount boot.automount. Aug 13 00:53:30.610529 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:53:30.610545 systemd[1]: Reached target integritysetup.target. Aug 13 00:53:30.610560 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:53:30.610574 systemd[1]: Reached target remote-fs.target. Aug 13 00:53:30.610588 systemd[1]: Reached target slices.target. Aug 13 00:53:30.610602 systemd[1]: Reached target swap.target. Aug 13 00:53:30.610616 systemd[1]: Reached target torcx.target. Aug 13 00:53:30.610629 systemd[1]: Reached target veritysetup.target. Aug 13 00:53:30.610643 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:53:30.610659 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:53:30.610672 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:53:30.610686 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:53:30.610701 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:53:30.610715 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:53:30.610728 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:53:30.610741 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:53:30.610755 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:53:30.610769 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:53:30.610791 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:53:30.610805 systemd[1]: Mounting media.mount... 
Aug 13 00:53:30.610818 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:30.610848 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:53:30.610873 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:53:30.610888 systemd[1]: Mounting tmp.mount... Aug 13 00:53:30.610900 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:53:30.610914 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:30.610927 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:53:30.610944 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:53:30.610957 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:30.610970 systemd[1]: Starting modprobe@drm.service... Aug 13 00:53:30.610983 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:30.610996 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:53:30.611015 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:30.611032 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:53:30.611045 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:53:30.611059 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:53:30.611075 systemd[1]: Starting systemd-journald.service... Aug 13 00:53:30.611089 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:53:30.611103 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:53:30.611116 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:53:30.611129 kernel: loop: module loaded Aug 13 00:53:30.611142 systemd[1]: Starting systemd-udev-trigger.service... 
Aug 13 00:53:30.611155 kernel: fuse: init (API version 7.34) Aug 13 00:53:30.611168 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:30.611181 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:53:30.611196 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:53:30.611209 systemd[1]: Mounted media.mount. Aug 13 00:53:30.611222 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:53:30.611236 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:53:30.611249 systemd[1]: Mounted tmp.mount. Aug 13 00:53:30.611262 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:53:30.611275 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:53:30.611292 systemd-journald[1018]: Journal started Aug 13 00:53:30.611459 systemd-journald[1018]: Runtime Journal (/run/log/journal/87099a11444e4748a21e0b5d6bd2cc1e) is 6.0M, max 48.5M, 42.5M free. Aug 13 00:53:30.511000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:53:30.607000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:53:30.607000 audit[1018]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffda562bee0 a2=4000 a3=7ffda562bf7c items=0 ppid=1 pid=1018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:30.607000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:53:30.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:30.614908 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:53:30.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.616868 systemd[1]: Started systemd-journald.service. Aug 13 00:53:30.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.618102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:30.618337 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:30.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.619689 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:53:30.620445 systemd[1]: Finished modprobe@drm.service. Aug 13 00:53:30.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:30.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.621815 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:53:30.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.623105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:30.623274 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:30.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.624664 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:53:30.624927 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:53:30.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.626068 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:30.626300 systemd[1]: Finished modprobe@loop.service. 
Aug 13 00:53:30.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.627663 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:53:30.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.629281 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:53:30.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.630731 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:53:30.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.632141 systemd[1]: Reached target network-pre.target. Aug 13 00:53:30.634425 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:53:30.636610 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:53:30.637514 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:53:30.639480 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:53:30.642764 systemd[1]: Starting systemd-journal-flush.service... 
Aug 13 00:53:30.643915 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:30.645342 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:53:30.646607 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:30.648206 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:53:30.650938 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:53:30.654169 systemd-journald[1018]: Time spent on flushing to /var/log/journal/87099a11444e4748a21e0b5d6bd2cc1e is 24.217ms for 1035 entries. Aug 13 00:53:30.654169 systemd-journald[1018]: System Journal (/var/log/journal/87099a11444e4748a21e0b5d6bd2cc1e) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:53:30.687083 systemd-journald[1018]: Received client request to flush runtime journal. Aug 13 00:53:30.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.656075 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
Aug 13 00:53:30.688061 udevadm[1063]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:53:30.657500 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:53:30.662254 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:53:30.663617 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:53:30.665182 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:53:30.670432 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:53:30.673223 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:53:30.681744 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:53:30.684342 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:53:30.688208 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:53:30.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:30.703962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:53:30.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.108968 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:53:31.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.111286 systemd[1]: Starting systemd-udevd.service... Aug 13 00:53:31.137750 systemd-udevd[1072]: Using default interface naming scheme 'v252'. Aug 13 00:53:31.150762 systemd[1]: Started systemd-udevd.service. 
Aug 13 00:53:31.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.153104 systemd[1]: Starting systemd-networkd.service... Aug 13 00:53:31.159411 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:53:31.193186 systemd[1]: Started systemd-userdbd.service. Aug 13 00:53:31.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.195073 systemd[1]: Found device dev-ttyS0.device. Aug 13 00:53:31.213672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:53:31.218915 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:53:31.223884 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:53:31.248004 systemd-networkd[1078]: lo: Link UP Aug 13 00:53:31.248016 systemd-networkd[1078]: lo: Gained carrier Aug 13 00:53:31.248402 systemd-networkd[1078]: Enumeration completed Aug 13 00:53:31.248507 systemd[1]: Started systemd-networkd.service. Aug 13 00:53:31.248922 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:53:31.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:31.250397 systemd-networkd[1078]: eth0: Link UP Aug 13 00:53:31.250407 systemd-networkd[1078]: eth0: Gained carrier Aug 13 00:53:31.252000 audit[1086]: AVC avc: denied { confidentiality } for pid=1086 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:53:31.263976 systemd-networkd[1078]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:53:31.252000 audit[1086]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557b25528600 a1=338ac a2=7f355b771bc5 a3=5 items=110 ppid=1072 pid=1086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:31.252000 audit: CWD cwd="/" Aug 13 00:53:31.252000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=1 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=2 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=3 name=(null) inode=15749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=4 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:53:31.252000 audit: PATH item=5 name=(null) inode=15750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=6 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=7 name=(null) inode=15751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=8 name=(null) inode=15751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=9 name=(null) inode=15752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=10 name=(null) inode=15751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=11 name=(null) inode=15753 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=12 name=(null) inode=15751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=13 name=(null) inode=15754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=14 name=(null) 
inode=15751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=15 name=(null) inode=15755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=16 name=(null) inode=15751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=17 name=(null) inode=15756 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=18 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=19 name=(null) inode=15757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=20 name=(null) inode=15757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=21 name=(null) inode=15758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=22 name=(null) inode=15757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=23 name=(null) inode=15759 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=24 name=(null) inode=15757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=25 name=(null) inode=15760 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=26 name=(null) inode=15757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=27 name=(null) inode=15761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=28 name=(null) inode=15757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=29 name=(null) inode=15762 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=30 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=31 name=(null) inode=15763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=32 name=(null) inode=15763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=33 name=(null) inode=15764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=34 name=(null) inode=15763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=35 name=(null) inode=15765 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=36 name=(null) inode=15763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=37 name=(null) inode=15766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.268945 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:53:31.269110 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 00:53:31.269222 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:53:31.252000 audit: PATH item=38 name=(null) inode=15763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=39 name=(null) inode=15767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=40 name=(null) inode=15763 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=41 name=(null) inode=15768 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=42 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=43 name=(null) inode=15769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=44 name=(null) inode=15769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=45 name=(null) inode=15770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=46 name=(null) inode=15769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=47 name=(null) inode=15771 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=48 name=(null) inode=15769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=49 name=(null) inode=15772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=50 name=(null) inode=15769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=51 name=(null) inode=15773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=52 name=(null) inode=15769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=53 name=(null) inode=15774 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=55 name=(null) inode=15775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=56 name=(null) inode=15775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=57 name=(null) inode=15776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=58 name=(null) inode=15775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=59 name=(null) inode=15777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=60 name=(null) inode=15775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=61 name=(null) inode=15778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=62 name=(null) inode=15778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=63 name=(null) inode=15779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=64 name=(null) inode=15778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=65 name=(null) inode=15780 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=66 name=(null) inode=15778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=67 name=(null) inode=15781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=68 name=(null) inode=15778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=69 name=(null) inode=15782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=70 name=(null) inode=15778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=71 name=(null) inode=15783 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=72 name=(null) inode=15775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=73 name=(null) inode=15784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=74 name=(null) inode=15784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=75 name=(null) inode=15785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=76 name=(null) inode=15784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:53:31.252000 audit: PATH item=77 name=(null) inode=15786 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=78 name=(null) inode=15784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=79 name=(null) inode=15787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=80 name=(null) inode=15784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=81 name=(null) inode=15788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=82 name=(null) inode=15784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=83 name=(null) inode=15789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=84 name=(null) inode=15775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=85 name=(null) inode=15790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=86 
name=(null) inode=15790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=87 name=(null) inode=15791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=88 name=(null) inode=15790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=89 name=(null) inode=15792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=90 name=(null) inode=15790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=91 name=(null) inode=15793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=92 name=(null) inode=15790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=93 name=(null) inode=15794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=94 name=(null) inode=15790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=95 name=(null) inode=15795 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=96 name=(null) inode=15775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=97 name=(null) inode=15796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=98 name=(null) inode=15796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=99 name=(null) inode=15797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=100 name=(null) inode=15796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=101 name=(null) inode=15798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=102 name=(null) inode=15796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=103 name=(null) inode=15799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=104 name=(null) inode=15796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=105 name=(null) inode=15800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=106 name=(null) inode=15796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=107 name=(null) inode=15801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PATH item=109 name=(null) inode=15802 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:31.252000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:53:31.290856 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:53:31.292858 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:53:31.325180 kernel: kvm: Nested Virtualization enabled Aug 13 00:53:31.325272 kernel: SVM: kvm: Nested Paging enabled Aug 13 00:53:31.326497 kernel: SVM: Virtual VMLOAD VMSAVE supported Aug 13 00:53:31.326530 kernel: SVM: Virtual GIF supported Aug 13 00:53:31.343855 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:53:31.371201 systemd[1]: Finished systemd-udev-settle.service. 
Aug 13 00:53:31.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.373240 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:53:31.381309 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:53:31.405970 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:53:31.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.406962 systemd[1]: Reached target cryptsetup.target. Aug 13 00:53:31.408808 systemd[1]: Starting lvm2-activation.service... Aug 13 00:53:31.413034 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:53:31.444353 systemd[1]: Finished lvm2-activation.service. Aug 13 00:53:31.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.445408 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:53:31.446296 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:53:31.446313 systemd[1]: Reached target local-fs.target. Aug 13 00:53:31.447124 systemd[1]: Reached target machines.target. Aug 13 00:53:31.449242 systemd[1]: Starting ldconfig.service... Aug 13 00:53:31.450257 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 00:53:31.450310 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:31.451363 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:53:31.453324 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:53:31.455369 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:53:31.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.457359 systemd[1]: Starting systemd-sysext.service... Aug 13 00:53:31.458464 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1114 (bootctl) Aug 13 00:53:31.459404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:53:31.463493 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:53:31.470602 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:53:31.475915 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:53:31.476180 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:53:31.486866 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 00:53:31.507822 systemd-fsck[1123]: fsck.fat 4.2 (2021-01-31) Aug 13 00:53:31.507822 systemd-fsck[1123]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 00:53:31.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.509329 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:53:31.512623 systemd[1]: Mounting boot.mount... 
Aug 13 00:53:31.742123 systemd[1]: Mounted boot.mount. Aug 13 00:53:31.754318 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:53:31.754594 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:53:31.755219 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:53:31.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.759135 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:53:31.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.772863 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 00:53:31.777536 (sd-sysext)[1135]: Using extensions 'kubernetes'. Aug 13 00:53:31.778037 (sd-sysext)[1135]: Merged extensions into '/usr'. Aug 13 00:53:31.794680 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:31.796153 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:53:31.797310 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:31.798969 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:31.801208 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:31.803731 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:31.804777 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:31.804911 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Aug 13 00:53:31.805028 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:31.808450 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:53:31.809723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:31.809935 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:31.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.811308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:31.811461 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:31.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.812875 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:31.813033 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:31.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:31.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.814458 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:31.814553 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:31.816092 systemd[1]: Finished systemd-sysext.service. Aug 13 00:53:31.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:31.819071 systemd[1]: Starting ensure-sysext.service... Aug 13 00:53:31.821295 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:53:31.825214 ldconfig[1113]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:53:31.826461 systemd[1]: Reloading. Aug 13 00:53:31.830972 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:53:31.831779 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:53:31.833316 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 00:53:31.870977 /usr/lib/systemd/system-generators/torcx-generator[1170]: time="2025-08-13T00:53:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:53:31.871329 /usr/lib/systemd/system-generators/torcx-generator[1170]: time="2025-08-13T00:53:31Z" level=info msg="torcx already run" Aug 13 00:53:31.953547 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:31.953565 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:53:31.972729 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:53:32.023158 systemd[1]: Finished ldconfig.service. Aug 13 00:53:32.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.025182 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:53:32.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.028053 systemd[1]: Starting audit-rules.service... Aug 13 00:53:32.029724 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:53:32.031643 systemd[1]: Starting systemd-journal-catalog-update.service... 
Aug 13 00:53:32.034158 systemd[1]: Starting systemd-resolved.service... Aug 13 00:53:32.036502 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:53:32.038338 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:53:32.040432 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:53:32.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.042000 audit[1231]: SYSTEM_BOOT pid=1231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.046574 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:32.046800 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.048015 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:32.049933 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:32.052895 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:32.055028 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.055132 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:32.055222 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:32.055285 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:32.057665 systemd[1]: Finished systemd-journal-catalog-update.service. 
Aug 13 00:53:32.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.061739 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:53:32.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.063012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:32.063172 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:32.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.064395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:32.064539 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:32.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.065748 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 13 00:53:32.066165 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:32.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:32.068056 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:32.068166 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.069719 systemd[1]: Starting systemd-update-done.service... Aug 13 00:53:32.072160 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:32.072848 augenrules[1249]: No rules Aug 13 00:53:32.072516 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.071000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:53:32.071000 audit[1249]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee19dc340 a2=420 a3=0 items=0 ppid=1219 pid=1249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:32.071000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:53:32.073716 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:32.075451 systemd[1]: Starting modprobe@efi_pstore.service... 
Aug 13 00:53:32.077222 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:32.077992 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.078112 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:32.078212 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:32.078289 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:32.079169 systemd[1]: Finished audit-rules.service. Aug 13 00:53:32.080658 systemd[1]: Finished systemd-update-done.service. Aug 13 00:53:32.081908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:32.082054 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:32.083311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:32.083447 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:32.084698 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:32.084874 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:32.086042 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:32.086129 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.089108 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:32.089315 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.090503 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:32.092625 systemd[1]: Starting modprobe@drm.service... 
Aug 13 00:53:32.094671 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:32.096749 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:32.097761 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.097912 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:32.103097 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:53:32.104162 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:32.104261 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:32.105333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:32.105484 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:32.106652 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:53:32.106845 systemd[1]: Finished modprobe@drm.service. Aug 13 00:53:32.108224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:32.108397 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:32.109642 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:32.109787 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:32.111059 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:32.111173 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.113662 systemd[1]: Finished ensure-sysext.service. Aug 13 00:53:32.126590 systemd-resolved[1224]: Positive Trust Anchors: Aug 13 00:53:32.126604 systemd-resolved[1224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:53:32.126630 systemd-resolved[1224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:53:32.131596 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:53:32.132922 systemd[1]: Reached target time-set.target. Aug 13 00:53:32.133245 systemd-resolved[1224]: Defaulting to hostname 'linux'. Aug 13 00:53:32.133824 systemd-timesyncd[1230]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:53:32.134137 systemd-timesyncd[1230]: Initial clock synchronization to Wed 2025-08-13 00:53:32.429304 UTC. Aug 13 00:53:32.134618 systemd[1]: Started systemd-resolved.service. Aug 13 00:53:32.135529 systemd[1]: Reached target network.target. Aug 13 00:53:32.136341 systemd[1]: Reached target nss-lookup.target. Aug 13 00:53:32.137183 systemd[1]: Reached target sysinit.target. Aug 13 00:53:32.138071 systemd[1]: Started motdgen.path. Aug 13 00:53:32.138817 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:53:32.140138 systemd[1]: Started logrotate.timer. Aug 13 00:53:32.141033 systemd[1]: Started mdadm.timer. Aug 13 00:53:32.141725 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:53:32.142617 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:53:32.142641 systemd[1]: Reached target paths.target. Aug 13 00:53:32.143412 systemd[1]: Reached target timers.target. Aug 13 00:53:32.144587 systemd[1]: Listening on dbus.socket. 
Aug 13 00:53:32.146860 systemd[1]: Starting docker.socket... Aug 13 00:53:32.148710 systemd[1]: Listening on sshd.socket. Aug 13 00:53:32.149605 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:32.150053 systemd[1]: Listening on docker.socket. Aug 13 00:53:32.150887 systemd[1]: Reached target sockets.target. Aug 13 00:53:32.151704 systemd[1]: Reached target basic.target. Aug 13 00:53:32.152645 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:53:32.152699 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.152725 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:53:32.154129 systemd[1]: Starting containerd.service... Aug 13 00:53:32.156158 systemd[1]: Starting dbus.service... Aug 13 00:53:32.158113 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:53:32.160917 systemd[1]: Starting extend-filesystems.service... Aug 13 00:53:32.161861 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:53:32.163003 systemd[1]: Starting motdgen.service... Aug 13 00:53:32.164138 jq[1282]: false Aug 13 00:53:32.165522 systemd[1]: Starting prepare-helm.service... Aug 13 00:53:32.167800 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:53:32.170158 systemd[1]: Starting sshd-keygen.service... Aug 13 00:53:32.181085 dbus-daemon[1281]: [system] SELinux support is enabled Aug 13 00:53:32.173592 systemd[1]: Starting systemd-logind.service... Aug 13 00:53:32.174423 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Aug 13 00:53:32.174489 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:53:32.175651 systemd[1]: Starting update-engine.service... Aug 13 00:53:32.178119 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:53:32.197785 jq[1302]: true Aug 13 00:53:32.180472 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:53:32.180717 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:53:32.181559 systemd[1]: Started dbus.service. Aug 13 00:53:32.186563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:53:32.188878 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:53:32.190358 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:53:32.190608 systemd[1]: Finished motdgen.service. Aug 13 00:53:32.193492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:53:32.193524 systemd[1]: Reached target system-config.target. Aug 13 00:53:32.194538 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:53:32.194550 systemd[1]: Reached target user-config.target. 
Aug 13 00:53:32.201136 jq[1310]: true
Aug 13 00:53:32.206922 tar[1306]: linux-amd64/helm
Aug 13 00:53:32.210493 extend-filesystems[1283]: Found loop1
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found sr0
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda1
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda2
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda3
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found usr
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda4
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda6
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda7
Aug 13 00:53:32.211507 extend-filesystems[1283]: Found vda9
Aug 13 00:53:32.211507 extend-filesystems[1283]: Checking size of /dev/vda9
Aug 13 00:53:32.225027 extend-filesystems[1283]: Resized partition /dev/vda9
Aug 13 00:53:32.227293 env[1311]: time="2025-08-13T00:53:32.224248754Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 00:53:32.227530 extend-filesystems[1326]: resize2fs 1.46.5 (30-Dec-2021)
Aug 13 00:53:32.228668 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 00:53:32.237092 update_engine[1297]: I0813 00:53:32.236927 1297 main.cc:92] Flatcar Update Engine starting
Aug 13 00:53:32.238762 systemd[1]: Started update-engine.service.
Aug 13 00:53:32.238894 update_engine[1297]: I0813 00:53:32.238790 1297 update_check_scheduler.cc:74] Next update check in 3m22s
Aug 13 00:53:32.241551 systemd[1]: Started locksmithd.service.
Aug 13 00:53:32.251878 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 00:53:32.275762 env[1311]: time="2025-08-13T00:53:32.270348761Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:53:32.276658 extend-filesystems[1326]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:53:32.276658 extend-filesystems[1326]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:53:32.276658 extend-filesystems[1326]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.276130763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.278430354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.280609761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.281605347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.281630915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.281647767Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.281659499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.281748446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.282028210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:53:32.282912 env[1311]: time="2025-08-13T00:53:32.282215962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:53:32.277269 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:53:32.283194 extend-filesystems[1283]: Resized filesystem in /dev/vda9
Aug 13 00:53:32.284355 env[1311]: time="2025-08-13T00:53:32.282235369Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:53:32.277546 systemd[1]: Finished extend-filesystems.service.
Aug 13 00:53:32.284980 env[1311]: time="2025-08-13T00:53:32.282324586Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 00:53:32.285267 env[1311]: time="2025-08-13T00:53:32.285244371Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:53:32.285873 bash[1343]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:53:32.288919 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 00:53:32.289884 systemd-logind[1294]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 00:53:32.290388 systemd-logind[1294]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 00:53:32.291715 systemd-logind[1294]: New seat seat0.
Aug 13 00:53:32.292343 env[1311]: time="2025-08-13T00:53:32.292207366Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:53:32.292343 env[1311]: time="2025-08-13T00:53:32.292269733Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:53:32.292343 env[1311]: time="2025-08-13T00:53:32.292283800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:53:32.292343 env[1311]: time="2025-08-13T00:53:32.292337470Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292353681Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292366825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292378527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292390981Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292418853Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292431597Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292443249Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.292465 env[1311]: time="2025-08-13T00:53:32.292454630Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:53:32.292625 env[1311]: time="2025-08-13T00:53:32.292587429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:53:32.292720 env[1311]: time="2025-08-13T00:53:32.292685994Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:53:32.293217 env[1311]: time="2025-08-13T00:53:32.293107484Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:53:32.293217 env[1311]: time="2025-08-13T00:53:32.293135076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293217 env[1311]: time="2025-08-13T00:53:32.293160774Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:53:32.293217 env[1311]: time="2025-08-13T00:53:32.293203624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293215196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293240794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293250973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293262364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293273926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293283995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293307168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293326 env[1311]: time="2025-08-13T00:53:32.293322808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:53:32.293525 env[1311]: time="2025-08-13T00:53:32.293486114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293525 env[1311]: time="2025-08-13T00:53:32.293505751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293525 env[1311]: time="2025-08-13T00:53:32.293519306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.293590 env[1311]: time="2025-08-13T00:53:32.293546627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:53:32.293590 env[1311]: time="2025-08-13T00:53:32.293561365Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 00:53:32.293590 env[1311]: time="2025-08-13T00:53:32.293571454Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:53:32.293590 env[1311]: time="2025-08-13T00:53:32.293587254Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 00:53:32.293672 env[1311]: time="2025-08-13T00:53:32.293633560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:53:32.294125 env[1311]: time="2025-08-13T00:53:32.293881225Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:53:32.294125 env[1311]: time="2025-08-13T00:53:32.293954743Z" level=info msg="Connect containerd service"
Aug 13 00:53:32.294125 env[1311]: time="2025-08-13T00:53:32.294000178Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:53:32.294743 env[1311]: time="2025-08-13T00:53:32.294531504Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:53:32.294743 env[1311]: time="2025-08-13T00:53:32.294733403Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:53:32.294789 env[1311]: time="2025-08-13T00:53:32.294763158Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:53:32.294939 env[1311]: time="2025-08-13T00:53:32.294838500Z" level=info msg="containerd successfully booted in 0.081468s"
Aug 13 00:53:32.294906 systemd[1]: Started containerd.service.
Aug 13 00:53:32.295446 env[1311]: time="2025-08-13T00:53:32.295114447Z" level=info msg="Start subscribing containerd event"
Aug 13 00:53:32.295446 env[1311]: time="2025-08-13T00:53:32.295189157Z" level=info msg="Start recovering state"
Aug 13 00:53:32.295446 env[1311]: time="2025-08-13T00:53:32.295258637Z" level=info msg="Start event monitor"
Aug 13 00:53:32.295446 env[1311]: time="2025-08-13T00:53:32.295272233Z" level=info msg="Start snapshots syncer"
Aug 13 00:53:32.295446 env[1311]: time="2025-08-13T00:53:32.295281250Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:53:32.295446 env[1311]: time="2025-08-13T00:53:32.295289014Z" level=info msg="Start streaming server"
Aug 13 00:53:32.298150 systemd[1]: Started systemd-logind.service.
Aug 13 00:53:32.317532 locksmithd[1333]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:53:32.432069 systemd-networkd[1078]: eth0: Gained IPv6LL
Aug 13 00:53:32.434091 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 00:53:32.435572 systemd[1]: Reached target network-online.target.
Aug 13 00:53:32.438546 systemd[1]: Starting kubelet.service...
Aug 13 00:53:32.653163 sshd_keygen[1308]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:53:32.721340 systemd[1]: Finished sshd-keygen.service.
Aug 13 00:53:32.723808 systemd[1]: Starting issuegen.service...
Aug 13 00:53:32.731175 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:53:32.731606 systemd[1]: Finished issuegen.service.
Aug 13 00:53:32.735314 systemd[1]: Starting systemd-user-sessions.service...
Aug 13 00:53:32.742941 systemd[1]: Finished systemd-user-sessions.service.
Aug 13 00:53:32.745463 systemd[1]: Started getty@tty1.service.
Aug 13 00:53:32.747590 systemd[1]: Started serial-getty@ttyS0.service.
Aug 13 00:53:32.748725 systemd[1]: Reached target getty.target.
Aug 13 00:53:33.050568 tar[1306]: linux-amd64/LICENSE
Aug 13 00:53:33.050773 tar[1306]: linux-amd64/README.md
Aug 13 00:53:33.055638 systemd[1]: Finished prepare-helm.service.
Aug 13 00:53:34.486006 systemd[1]: Started kubelet.service.
Aug 13 00:53:34.487467 systemd[1]: Reached target multi-user.target.
Aug 13 00:53:34.489999 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Aug 13 00:53:34.496522 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Aug 13 00:53:34.496797 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Aug 13 00:53:34.502973 systemd[1]: Startup finished in 6.435s (kernel) + 7.037s (userspace) = 13.473s.
Aug 13 00:53:35.198135 kubelet[1382]: E0813 00:53:35.198047 1382 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:53:35.199785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:53:35.199966 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:53:35.295357 systemd[1]: Created slice system-sshd.slice.
Aug 13 00:53:35.297034 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:54376.service.
Aug 13 00:53:35.340621 sshd[1393]: Accepted publickey for core from 10.0.0.1 port 54376 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:53:35.342025 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:35.351726 systemd-logind[1294]: New session 1 of user core.
Aug 13 00:53:35.352746 systemd[1]: Created slice user-500.slice.
Aug 13 00:53:35.354253 systemd[1]: Starting user-runtime-dir@500.service...
Aug 13 00:53:35.364274 systemd[1]: Finished user-runtime-dir@500.service.
Aug 13 00:53:35.366086 systemd[1]: Starting user@500.service...
Aug 13 00:53:35.368605 (systemd)[1398]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:35.441549 systemd[1398]: Queued start job for default target default.target.
Aug 13 00:53:35.441772 systemd[1398]: Reached target paths.target.
Aug 13 00:53:35.441802 systemd[1398]: Reached target sockets.target.
Aug 13 00:53:35.441816 systemd[1398]: Reached target timers.target.
Aug 13 00:53:35.441827 systemd[1398]: Reached target basic.target.
Aug 13 00:53:35.441878 systemd[1398]: Reached target default.target.
Aug 13 00:53:35.441899 systemd[1398]: Startup finished in 68ms.
Aug 13 00:53:35.442003 systemd[1]: Started user@500.service.
Aug 13 00:53:35.442978 systemd[1]: Started session-1.scope.
Aug 13 00:53:35.495111 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:54392.service.
Aug 13 00:53:35.557650 sshd[1407]: Accepted publickey for core from 10.0.0.1 port 54392 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:53:35.559282 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:35.562979 systemd-logind[1294]: New session 2 of user core.
Aug 13 00:53:35.563780 systemd[1]: Started session-2.scope.
Aug 13 00:53:35.616949 sshd[1407]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:35.620262 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:54396.service.
Aug 13 00:53:35.620899 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:54392.service: Deactivated successfully.
Aug 13 00:53:35.622007 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:53:35.622414 systemd-logind[1294]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:53:35.623338 systemd-logind[1294]: Removed session 2.
Aug 13 00:53:35.661939 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 54396 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:53:35.663006 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:35.666260 systemd-logind[1294]: New session 3 of user core.
Aug 13 00:53:35.667186 systemd[1]: Started session-3.scope.
Aug 13 00:53:35.718333 sshd[1413]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:35.721848 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:54404.service.
Aug 13 00:53:35.722697 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:54396.service: Deactivated successfully.
Aug 13 00:53:35.724273 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:53:35.724353 systemd-logind[1294]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:53:35.725573 systemd-logind[1294]: Removed session 3.
Aug 13 00:53:35.766598 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 54404 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:53:35.768438 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:35.773431 systemd-logind[1294]: New session 4 of user core.
Aug 13 00:53:35.774533 systemd[1]: Started session-4.scope.
Aug 13 00:53:35.864266 sshd[1420]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:35.866769 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:54406.service.
Aug 13 00:53:35.867480 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:54404.service: Deactivated successfully.
Aug 13 00:53:35.868262 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:53:35.868336 systemd-logind[1294]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:53:35.869236 systemd-logind[1294]: Removed session 4.
Aug 13 00:53:35.908292 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 54406 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:53:35.909612 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:53:35.913473 systemd-logind[1294]: New session 5 of user core.
Aug 13 00:53:35.914280 systemd[1]: Started session-5.scope.
Aug 13 00:53:35.994733 sudo[1432]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:53:35.994962 sudo[1432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:53:36.030773 systemd[1]: Starting docker.service...
Aug 13 00:53:36.112390 env[1444]: time="2025-08-13T00:53:36.112315425Z" level=info msg="Starting up"
Aug 13 00:53:36.114121 env[1444]: time="2025-08-13T00:53:36.114088800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:53:36.114201 env[1444]: time="2025-08-13T00:53:36.114181293Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:53:36.114340 env[1444]: time="2025-08-13T00:53:36.114301629Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:53:36.114340 env[1444]: time="2025-08-13T00:53:36.114324381Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:53:36.116290 env[1444]: time="2025-08-13T00:53:36.116257752Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:53:36.116290 env[1444]: time="2025-08-13T00:53:36.116280565Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:53:36.116379 env[1444]: time="2025-08-13T00:53:36.116298851Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:53:36.116379 env[1444]: time="2025-08-13T00:53:36.116323244Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:53:36.843690 env[1444]: time="2025-08-13T00:53:36.843613578Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 13 00:53:36.843690 env[1444]: time="2025-08-13T00:53:36.843650488Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 13 00:53:36.844007 env[1444]: time="2025-08-13T00:53:36.843917392Z" level=info msg="Loading containers: start."
Aug 13 00:53:36.961899 kernel: Initializing XFRM netlink socket
Aug 13 00:53:36.991428 env[1444]: time="2025-08-13T00:53:36.991368934Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 00:53:37.046064 systemd-networkd[1078]: docker0: Link UP
Aug 13 00:53:37.061425 env[1444]: time="2025-08-13T00:53:37.061355205Z" level=info msg="Loading containers: done."
Aug 13 00:53:37.082101 env[1444]: time="2025-08-13T00:53:37.082036173Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:53:37.082305 env[1444]: time="2025-08-13T00:53:37.082281658Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 00:53:37.082430 env[1444]: time="2025-08-13T00:53:37.082400587Z" level=info msg="Daemon has completed initialization"
Aug 13 00:53:37.102498 systemd[1]: Started docker.service.
Aug 13 00:53:37.108872 env[1444]: time="2025-08-13T00:53:37.108797454Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:53:38.326300 env[1311]: time="2025-08-13T00:53:38.326240178Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 00:53:39.068201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349917946.mount: Deactivated successfully.
Aug 13 00:53:41.596028 env[1311]: time="2025-08-13T00:53:41.595969806Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:41.597821 env[1311]: time="2025-08-13T00:53:41.597794761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:41.599727 env[1311]: time="2025-08-13T00:53:41.599700194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:41.603144 env[1311]: time="2025-08-13T00:53:41.603105779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:41.603707 env[1311]: time="2025-08-13T00:53:41.603676084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 00:53:41.604345 env[1311]: time="2025-08-13T00:53:41.604322371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 00:53:44.054519 env[1311]: time="2025-08-13T00:53:44.054431264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:44.056573 env[1311]: time="2025-08-13T00:53:44.056529722Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:44.059942 env[1311]: time="2025-08-13T00:53:44.059883528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:44.061972 env[1311]: time="2025-08-13T00:53:44.061916733Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:44.062946 env[1311]: time="2025-08-13T00:53:44.062896992Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 00:53:44.063548 env[1311]: time="2025-08-13T00:53:44.063511011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 00:53:45.451517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:53:45.451814 systemd[1]: Stopped kubelet.service.
Aug 13 00:53:45.453507 systemd[1]: Starting kubelet.service...
Aug 13 00:53:45.596198 systemd[1]: Started kubelet.service.
Aug 13 00:53:45.643485 kubelet[1583]: E0813 00:53:45.643399 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:53:45.646364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:53:45.646508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:53:46.766057 env[1311]: time="2025-08-13T00:53:46.765965991Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:46.768609 env[1311]: time="2025-08-13T00:53:46.768569221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:46.770471 env[1311]: time="2025-08-13T00:53:46.770429308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:46.772304 env[1311]: time="2025-08-13T00:53:46.772283689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:46.773225 env[1311]: time="2025-08-13T00:53:46.773181792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 00:53:46.773889 env[1311]: time="2025-08-13T00:53:46.773865598Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 00:53:48.907543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629091206.mount: Deactivated successfully.
Aug 13 00:53:49.757415 env[1311]: time="2025-08-13T00:53:49.757340373Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:49.759927 env[1311]: time="2025-08-13T00:53:49.759880397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:49.761602 env[1311]: time="2025-08-13T00:53:49.761563448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:49.763200 env[1311]: time="2025-08-13T00:53:49.763157014Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:49.763562 env[1311]: time="2025-08-13T00:53:49.763525915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 00:53:49.764102 env[1311]: time="2025-08-13T00:53:49.764069379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:53:52.855389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618408953.mount: Deactivated successfully.
Aug 13 00:53:55.274774 env[1311]: time="2025-08-13T00:53:55.274686389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:55.276960 env[1311]: time="2025-08-13T00:53:55.276895609Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:55.279305 env[1311]: time="2025-08-13T00:53:55.279242496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:55.281762 env[1311]: time="2025-08-13T00:53:55.281673836Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:53:55.282967 env[1311]: time="2025-08-13T00:53:55.282913147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:53:55.283490 env[1311]: time="2025-08-13T00:53:55.283459708Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:53:55.844012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:53:55.844251 systemd[1]: Stopped kubelet.service.
Aug 13 00:53:55.845890 systemd[1]: Starting kubelet.service...
Aug 13 00:53:55.935390 systemd[1]: Started kubelet.service.
Aug 13 00:53:56.351750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63735728.mount: Deactivated successfully.
Aug 13 00:53:56.357148 env[1311]: time="2025-08-13T00:53:56.357087273Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:56.359035 env[1311]: time="2025-08-13T00:53:56.359006387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:56.360583 env[1311]: time="2025-08-13T00:53:56.360550748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:56.362210 env[1311]: time="2025-08-13T00:53:56.362182393Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:53:56.362760 env[1311]: time="2025-08-13T00:53:56.362726833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:53:56.363955 env[1311]: time="2025-08-13T00:53:56.363688015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:53:56.376945 kubelet[1599]: E0813 00:53:56.376881 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:56.379112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:56.379292 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:53:56.966120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255709601.mount: Deactivated successfully. Aug 13 00:54:01.668072 env[1311]: time="2025-08-13T00:54:01.668010590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:01.670354 env[1311]: time="2025-08-13T00:54:01.670309113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:01.672195 env[1311]: time="2025-08-13T00:54:01.672157459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:01.674234 env[1311]: time="2025-08-13T00:54:01.674192587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:01.675387 env[1311]: time="2025-08-13T00:54:01.675328432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:54:04.283982 systemd[1]: Stopped kubelet.service. Aug 13 00:54:04.286002 systemd[1]: Starting kubelet.service... Aug 13 00:54:04.305512 systemd[1]: Reloading. 
Aug 13 00:54:04.453143 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2025-08-13T00:54:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:04.453170 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2025-08-13T00:54:04Z" level=info msg="torcx already run" Aug 13 00:54:05.343608 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:05.343624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:05.362822 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:05.433247 systemd[1]: Started kubelet.service. Aug 13 00:54:05.437180 systemd[1]: Stopping kubelet.service... Aug 13 00:54:05.470749 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:05.471043 systemd[1]: Stopped kubelet.service. Aug 13 00:54:05.472997 systemd[1]: Starting kubelet.service... Aug 13 00:54:05.566535 systemd[1]: Started kubelet.service. Aug 13 00:54:05.703168 kubelet[1717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:05.703168 kubelet[1717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:54:05.703168 kubelet[1717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:05.704355 kubelet[1717]: I0813 00:54:05.704134 1717 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:05.920345 kubelet[1717]: I0813 00:54:05.920294 1717 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:54:05.920345 kubelet[1717]: I0813 00:54:05.920330 1717 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:05.920899 kubelet[1717]: I0813 00:54:05.920879 1717 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:54:05.951408 kubelet[1717]: E0813 00:54:05.951367 1717 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:05.954509 kubelet[1717]: I0813 00:54:05.954206 1717 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:05.959507 kubelet[1717]: E0813 00:54:05.959462 1717 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:05.959507 kubelet[1717]: I0813 00:54:05.959502 1717 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Aug 13 00:54:05.965248 kubelet[1717]: I0813 00:54:05.965216 1717 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:54:05.966384 kubelet[1717]: I0813 00:54:05.966349 1717 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:54:05.966566 kubelet[1717]: I0813 00:54:05.966514 1717 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:05.966766 kubelet[1717]: I0813 00:54:05.966557 1717 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerR
eservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:54:05.966914 kubelet[1717]: I0813 00:54:05.966776 1717 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:54:05.966914 kubelet[1717]: I0813 00:54:05.966786 1717 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:54:05.966981 kubelet[1717]: I0813 00:54:05.966948 1717 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:05.972511 kubelet[1717]: I0813 00:54:05.972467 1717 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:54:05.972511 kubelet[1717]: I0813 00:54:05.972508 1717 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:05.972602 kubelet[1717]: I0813 00:54:05.972551 1717 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:54:05.972602 kubelet[1717]: I0813 00:54:05.972582 1717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:05.990174 kubelet[1717]: I0813 00:54:05.990129 1717 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:54:05.990634 kubelet[1717]: I0813 00:54:05.990608 1717 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:54:05.990686 kubelet[1717]: W0813 00:54:05.990673 1717 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 00:54:05.997010 kubelet[1717]: W0813 00:54:05.996938 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:05.997092 kubelet[1717]: E0813 00:54:05.997008 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:05.999459 kubelet[1717]: I0813 00:54:05.999435 1717 server.go:1274] "Started kubelet" Aug 13 00:54:05.999878 kubelet[1717]: I0813 00:54:05.999839 1717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:06.000272 kubelet[1717]: I0813 00:54:06.000213 1717 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:06.000325 kubelet[1717]: I0813 00:54:06.000307 1717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:06.001577 kubelet[1717]: I0813 00:54:06.001544 1717 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:54:06.003217 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Aug 13 00:54:06.003392 kubelet[1717]: I0813 00:54:06.003365 1717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:06.003588 kubelet[1717]: W0813 00:54:06.003532 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:06.003709 kubelet[1717]: E0813 00:54:06.003680 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:06.004028 kubelet[1717]: I0813 00:54:06.004008 1717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:06.006398 kubelet[1717]: I0813 00:54:06.006379 1717 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:54:06.006673 kubelet[1717]: I0813 00:54:06.006657 1717 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:54:06.006864 kubelet[1717]: I0813 00:54:06.006814 1717 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:06.007261 kubelet[1717]: E0813 00:54:06.007230 1717 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:54:06.007393 kubelet[1717]: W0813 00:54:06.007351 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:06.007528 kubelet[1717]: E0813 00:54:06.007491 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:06.007973 kubelet[1717]: E0813 00:54:06.007937 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:06.008135 kubelet[1717]: E0813 00:54:06.008009 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Aug 13 00:54:06.009170 kubelet[1717]: I0813 00:54:06.009137 1717 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:54:06.009170 kubelet[1717]: I0813 00:54:06.009156 1717 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:54:06.009263 kubelet[1717]: I0813 00:54:06.009246 1717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:06.010659 kubelet[1717]: E0813 00:54:06.009560 1717 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d71aa1b2eef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:05.999402735 +0000 UTC m=+0.429524001,LastTimestamp:2025-08-13 00:54:05.999402735 +0000 UTC m=+0.429524001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:54:06.033754 kubelet[1717]: I0813 00:54:06.033720 1717 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:54:06.033754 kubelet[1717]: I0813 00:54:06.033744 1717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:06.033922 kubelet[1717]: I0813 00:54:06.033767 1717 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:06.052857 kubelet[1717]: I0813 00:54:06.052772 1717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:06.054457 kubelet[1717]: I0813 00:54:06.054434 1717 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:54:06.054541 kubelet[1717]: I0813 00:54:06.054486 1717 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:54:06.054669 kubelet[1717]: I0813 00:54:06.054642 1717 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:54:06.054715 kubelet[1717]: E0813 00:54:06.054702 1717 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:06.055189 kubelet[1717]: W0813 00:54:06.055153 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:06.055258 kubelet[1717]: E0813 00:54:06.055205 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:06.108599 kubelet[1717]: E0813 00:54:06.108514 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:06.155933 kubelet[1717]: E0813 00:54:06.155819 1717 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:54:06.208794 kubelet[1717]: E0813 00:54:06.208653 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:06.208794 kubelet[1717]: E0813 00:54:06.208687 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: 
connection refused" interval="400ms" Aug 13 00:54:06.309270 kubelet[1717]: E0813 00:54:06.309173 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:06.356494 kubelet[1717]: E0813 00:54:06.356390 1717 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:54:06.409975 kubelet[1717]: E0813 00:54:06.409907 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:06.511062 kubelet[1717]: E0813 00:54:06.510914 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:06.604174 kubelet[1717]: I0813 00:54:06.604137 1717 policy_none.go:49] "None policy: Start" Aug 13 00:54:06.605001 kubelet[1717]: I0813 00:54:06.604979 1717 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:54:06.605078 kubelet[1717]: I0813 00:54:06.605041 1717 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:54:06.609271 kubelet[1717]: E0813 00:54:06.609228 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Aug 13 00:54:06.610021 kubelet[1717]: I0813 00:54:06.609992 1717 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:54:06.610177 kubelet[1717]: I0813 00:54:06.610152 1717 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:06.610239 kubelet[1717]: I0813 00:54:06.610179 1717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:06.610681 kubelet[1717]: I0813 00:54:06.610657 1717 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Aug 13 00:54:06.611374 kubelet[1717]: E0813 00:54:06.611357 1717 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:54:06.712200 kubelet[1717]: I0813 00:54:06.712166 1717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:06.712646 kubelet[1717]: E0813 00:54:06.712555 1717 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Aug 13 00:54:06.812425 kubelet[1717]: I0813 00:54:06.812002 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f4140d73dbbf7a0bb72a0d1599d5513-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f4140d73dbbf7a0bb72a0d1599d5513\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:06.812425 kubelet[1717]: I0813 00:54:06.812056 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f4140d73dbbf7a0bb72a0d1599d5513-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f4140d73dbbf7a0bb72a0d1599d5513\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:06.812425 kubelet[1717]: I0813 00:54:06.812105 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:06.812425 kubelet[1717]: I0813 00:54:06.812135 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:06.812425 kubelet[1717]: I0813 00:54:06.812172 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f4140d73dbbf7a0bb72a0d1599d5513-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f4140d73dbbf7a0bb72a0d1599d5513\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:06.812623 kubelet[1717]: I0813 00:54:06.812202 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:06.812623 kubelet[1717]: I0813 00:54:06.812246 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:06.812623 kubelet[1717]: I0813 00:54:06.812274 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:06.812623 kubelet[1717]: I0813 00:54:06.812297 1717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:06.913892 kubelet[1717]: I0813 00:54:06.913858 1717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:06.914236 kubelet[1717]: E0813 00:54:06.914199 1717 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Aug 13 00:54:06.987383 kubelet[1717]: W0813 00:54:06.987283 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:06.987383 kubelet[1717]: E0813 00:54:06.987368 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:07.035201 kubelet[1717]: W0813 00:54:07.035140 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:07.035201 kubelet[1717]: E0813 00:54:07.035189 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: 
connection refused" logger="UnhandledError" Aug 13 00:54:07.065412 kubelet[1717]: E0813 00:54:07.064687 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:07.065412 kubelet[1717]: E0813 00:54:07.064687 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:07.065544 env[1311]: time="2025-08-13T00:54:07.065360551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:07.065544 env[1311]: time="2025-08-13T00:54:07.065454001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f4140d73dbbf7a0bb72a0d1599d5513,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:07.065964 env[1311]: time="2025-08-13T00:54:07.065949113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:07.066006 kubelet[1717]: E0813 00:54:07.065685 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:07.316807 kubelet[1717]: I0813 00:54:07.316667 1717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:07.317323 kubelet[1717]: E0813 00:54:07.317120 1717 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Aug 13 00:54:07.410252 kubelet[1717]: E0813 00:54:07.409985 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Aug 13 00:54:07.528992 kubelet[1717]: W0813 00:54:07.528883 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:07.528992 kubelet[1717]: E0813 00:54:07.528976 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:07.583009 kubelet[1717]: W0813 00:54:07.582807 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:07.583009 kubelet[1717]: E0813 00:54:07.582912 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:08.119217 kubelet[1717]: I0813 00:54:08.119184 1717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:08.119646 kubelet[1717]: E0813 00:54:08.119600 1717 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" 
node="localhost" Aug 13 00:54:08.138598 kubelet[1717]: E0813 00:54:08.138532 1717 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:08.616546 kubelet[1717]: E0813 00:54:08.616403 1717 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d71aa1b2eef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:05.999402735 +0000 UTC m=+0.429524001,LastTimestamp:2025-08-13 00:54:05.999402735 +0000 UTC m=+0.429524001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:54:08.668171 kubelet[1717]: W0813 00:54:08.668118 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:08.668253 kubelet[1717]: E0813 00:54:08.668174 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" 
logger="UnhandledError" Aug 13 00:54:09.011416 kubelet[1717]: E0813 00:54:09.011288 1717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" Aug 13 00:54:09.248312 kubelet[1717]: W0813 00:54:09.248226 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:09.248312 kubelet[1717]: E0813 00:54:09.248298 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:09.721691 kubelet[1717]: I0813 00:54:09.721653 1717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:09.722144 kubelet[1717]: E0813 00:54:09.722086 1717 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Aug 13 00:54:09.806954 kubelet[1717]: W0813 00:54:09.806909 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:09.807028 kubelet[1717]: E0813 00:54:09.806962 1717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:10.133148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427954202.mount: Deactivated successfully. Aug 13 00:54:10.136572 env[1311]: time="2025-08-13T00:54:10.136523054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.140588 env[1311]: time="2025-08-13T00:54:10.140549492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.142764 env[1311]: time="2025-08-13T00:54:10.142712094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.143620 env[1311]: time="2025-08-13T00:54:10.143596782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.144888 env[1311]: time="2025-08-13T00:54:10.144860200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.146783 env[1311]: time="2025-08-13T00:54:10.146759324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.148068 env[1311]: time="2025-08-13T00:54:10.148040396Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.149289 env[1311]: time="2025-08-13T00:54:10.149248529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.150414 env[1311]: time="2025-08-13T00:54:10.150378329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.151681 env[1311]: time="2025-08-13T00:54:10.151659923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.152919 env[1311]: time="2025-08-13T00:54:10.152888316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.157578 env[1311]: time="2025-08-13T00:54:10.157512677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:10.517270 kubelet[1717]: W0813 00:54:10.517136 1717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Aug 13 00:54:10.517270 kubelet[1717]: E0813 00:54:10.517183 1717 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:10.717552 env[1311]: time="2025-08-13T00:54:10.717441879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:10.717552 env[1311]: time="2025-08-13T00:54:10.717511601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:10.717809 env[1311]: time="2025-08-13T00:54:10.717522548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:10.717809 env[1311]: time="2025-08-13T00:54:10.717699283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f432328053f034d6cf33f4d17f3b14ca03a9bb7809a23b8f3d9d98cfabea59a pid=1772 runtime=io.containerd.runc.v2 Aug 13 00:54:10.721021 env[1311]: time="2025-08-13T00:54:10.720939977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:10.721181 env[1311]: time="2025-08-13T00:54:10.721154315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:10.721323 env[1311]: time="2025-08-13T00:54:10.721299332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:10.723780 env[1311]: time="2025-08-13T00:54:10.723412954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/847f9fc29e269e7dd09000904741ec072d32ee6a6b431ab1972a922d0c4878bf pid=1775 runtime=io.containerd.runc.v2 Aug 13 00:54:10.723916 env[1311]: time="2025-08-13T00:54:10.723250563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:10.723916 env[1311]: time="2025-08-13T00:54:10.723290783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:10.723916 env[1311]: time="2025-08-13T00:54:10.723300306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:10.723916 env[1311]: time="2025-08-13T00:54:10.723432772Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a1bc524a03637e29c305be1ed572db15759dfb010f4ecbc81b15ffcb2f2724b pid=1781 runtime=io.containerd.runc.v2 Aug 13 00:54:10.951200 env[1311]: time="2025-08-13T00:54:10.951162069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7f4140d73dbbf7a0bb72a0d1599d5513,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f432328053f034d6cf33f4d17f3b14ca03a9bb7809a23b8f3d9d98cfabea59a\"" Aug 13 00:54:10.952513 kubelet[1717]: E0813 00:54:10.952479 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:10.957797 env[1311]: time="2025-08-13T00:54:10.957759204Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"847f9fc29e269e7dd09000904741ec072d32ee6a6b431ab1972a922d0c4878bf\"" Aug 13 00:54:10.962435 kubelet[1717]: E0813 00:54:10.962392 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:10.962930 env[1311]: time="2025-08-13T00:54:10.962885409Z" level=info msg="CreateContainer within sandbox \"9f432328053f034d6cf33f4d17f3b14ca03a9bb7809a23b8f3d9d98cfabea59a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:54:10.963612 env[1311]: time="2025-08-13T00:54:10.962932836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1bc524a03637e29c305be1ed572db15759dfb010f4ecbc81b15ffcb2f2724b\"" Aug 13 00:54:10.964714 kubelet[1717]: E0813 00:54:10.964688 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:10.964786 env[1311]: time="2025-08-13T00:54:10.964719182Z" level=info msg="CreateContainer within sandbox \"847f9fc29e269e7dd09000904741ec072d32ee6a6b431ab1972a922d0c4878bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:54:10.966524 env[1311]: time="2025-08-13T00:54:10.966497327Z" level=info msg="CreateContainer within sandbox \"8a1bc524a03637e29c305be1ed572db15759dfb010f4ecbc81b15ffcb2f2724b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:54:11.001712 env[1311]: time="2025-08-13T00:54:11.001634004Z" level=info msg="CreateContainer within sandbox \"9f432328053f034d6cf33f4d17f3b14ca03a9bb7809a23b8f3d9d98cfabea59a\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9dac807c8a94ed9190b408e3a4c18304191aa0aa5f2e3ef800172d4feee2fc04\"" Aug 13 00:54:11.002387 env[1311]: time="2025-08-13T00:54:11.002359172Z" level=info msg="StartContainer for \"9dac807c8a94ed9190b408e3a4c18304191aa0aa5f2e3ef800172d4feee2fc04\"" Aug 13 00:54:11.008534 env[1311]: time="2025-08-13T00:54:11.008471221Z" level=info msg="CreateContainer within sandbox \"8a1bc524a03637e29c305be1ed572db15759dfb010f4ecbc81b15ffcb2f2724b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6fb251ba70014650b7eadeb6ce4131ba27856a12c3f3782fd21d788e94e6f746\"" Aug 13 00:54:11.009313 env[1311]: time="2025-08-13T00:54:11.009281621Z" level=info msg="StartContainer for \"6fb251ba70014650b7eadeb6ce4131ba27856a12c3f3782fd21d788e94e6f746\"" Aug 13 00:54:11.009934 env[1311]: time="2025-08-13T00:54:11.009896384Z" level=info msg="CreateContainer within sandbox \"847f9fc29e269e7dd09000904741ec072d32ee6a6b431ab1972a922d0c4878bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cf1414e3016711e509db2b4222bf36fb27974c93564622e280c580cd222cf48\"" Aug 13 00:54:11.010279 env[1311]: time="2025-08-13T00:54:11.010248229Z" level=info msg="StartContainer for \"0cf1414e3016711e509db2b4222bf36fb27974c93564622e280c580cd222cf48\"" Aug 13 00:54:11.182365 env[1311]: time="2025-08-13T00:54:11.182271014Z" level=info msg="StartContainer for \"9dac807c8a94ed9190b408e3a4c18304191aa0aa5f2e3ef800172d4feee2fc04\" returns successfully" Aug 13 00:54:11.193436 env[1311]: time="2025-08-13T00:54:11.193379945Z" level=info msg="StartContainer for \"6fb251ba70014650b7eadeb6ce4131ba27856a12c3f3782fd21d788e94e6f746\" returns successfully" Aug 13 00:54:11.197847 env[1311]: time="2025-08-13T00:54:11.195841936Z" level=info msg="StartContainer for \"0cf1414e3016711e509db2b4222bf36fb27974c93564622e280c580cd222cf48\" returns successfully" Aug 13 00:54:12.074421 kubelet[1717]: E0813 
00:54:12.074370 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:12.076693 kubelet[1717]: E0813 00:54:12.076662 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:12.078593 kubelet[1717]: E0813 00:54:12.078563 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:12.884078 kubelet[1717]: E0813 00:54:12.884042 1717 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:54:12.924331 kubelet[1717]: I0813 00:54:12.924291 1717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:13.032714 kubelet[1717]: I0813 00:54:13.032659 1717 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:54:13.032714 kubelet[1717]: E0813 00:54:13.032711 1717 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:54:13.041835 kubelet[1717]: E0813 00:54:13.041777 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.081039 kubelet[1717]: E0813 00:54:13.080997 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:13.081523 kubelet[1717]: E0813 00:54:13.081240 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Aug 13 00:54:13.081815 kubelet[1717]: E0813 00:54:13.081784 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:13.142919 kubelet[1717]: E0813 00:54:13.142597 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.243346 kubelet[1717]: E0813 00:54:13.243282 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.344040 kubelet[1717]: E0813 00:54:13.343985 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.444718 kubelet[1717]: E0813 00:54:13.444581 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.546008 kubelet[1717]: E0813 00:54:13.545750 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.646975 kubelet[1717]: E0813 00:54:13.646929 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.747942 kubelet[1717]: E0813 00:54:13.747786 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.848553 kubelet[1717]: E0813 00:54:13.848512 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:13.949143 kubelet[1717]: E0813 00:54:13.949076 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.049529 kubelet[1717]: E0813 00:54:14.049373 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.083095 
kubelet[1717]: E0813 00:54:14.083040 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:14.150115 kubelet[1717]: E0813 00:54:14.150054 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.250746 kubelet[1717]: E0813 00:54:14.250703 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.351386 kubelet[1717]: E0813 00:54:14.351336 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.452441 kubelet[1717]: E0813 00:54:14.452401 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.552953 kubelet[1717]: E0813 00:54:14.552883 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.653758 kubelet[1717]: E0813 00:54:14.653628 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.754320 kubelet[1717]: E0813 00:54:14.754267 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.855087 kubelet[1717]: E0813 00:54:14.855020 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:14.862688 kubelet[1717]: E0813 00:54:14.862654 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:14.956225 kubelet[1717]: E0813 00:54:14.956069 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" 
not found" Aug 13 00:54:15.007762 systemd[1]: Reloading. Aug 13 00:54:15.064780 kubelet[1717]: E0813 00:54:15.056380 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:15.068864 /usr/lib/systemd/system-generators/torcx-generator[2017]: time="2025-08-13T00:54:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:15.069163 /usr/lib/systemd/system-generators/torcx-generator[2017]: time="2025-08-13T00:54:15Z" level=info msg="torcx already run" Aug 13 00:54:15.084604 kubelet[1717]: E0813 00:54:15.084574 1717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:15.145787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:15.145813 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:15.156812 kubelet[1717]: E0813 00:54:15.156760 1717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:15.170398 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:15.246154 systemd[1]: Stopping kubelet.service... Aug 13 00:54:15.269309 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:15.269744 systemd[1]: Stopped kubelet.service. 
Aug 13 00:54:15.272072 systemd[1]: Starting kubelet.service... Aug 13 00:54:15.368222 systemd[1]: Started kubelet.service. Aug 13 00:54:15.408756 kubelet[2074]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:15.408756 kubelet[2074]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:54:15.408756 kubelet[2074]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:15.409203 kubelet[2074]: I0813 00:54:15.408858 2074 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:15.415793 kubelet[2074]: I0813 00:54:15.415745 2074 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:54:15.415793 kubelet[2074]: I0813 00:54:15.415782 2074 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:15.416082 kubelet[2074]: I0813 00:54:15.416056 2074 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:54:15.418111 kubelet[2074]: I0813 00:54:15.417249 2074 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 00:54:15.419007 kubelet[2074]: I0813 00:54:15.418981 2074 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:15.423846 kubelet[2074]: E0813 00:54:15.423793 2074 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:15.423911 kubelet[2074]: I0813 00:54:15.423850 2074 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:54:15.428100 kubelet[2074]: I0813 00:54:15.428084 2074 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:54:15.428464 kubelet[2074]: I0813 00:54:15.428452 2074 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:54:15.428602 kubelet[2074]: I0813 00:54:15.428565 2074 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:15.428753 kubelet[2074]: I0813 00:54:15.428595 2074 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:54:15.428871 kubelet[2074]: I0813 00:54:15.428759 2074 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:54:15.428871 kubelet[2074]: I0813 00:54:15.428775 2074 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:54:15.428871 kubelet[2074]: I0813 00:54:15.428803 2074 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:15.428945 kubelet[2074]: I0813 00:54:15.428897 2074 kubelet.go:408] "Attempting 
to sync node with API server" Aug 13 00:54:15.428945 kubelet[2074]: I0813 00:54:15.428907 2074 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:15.428945 kubelet[2074]: I0813 00:54:15.428931 2074 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:54:15.428945 kubelet[2074]: I0813 00:54:15.428940 2074 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:15.429745 kubelet[2074]: I0813 00:54:15.429731 2074 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:54:15.433155 kubelet[2074]: I0813 00:54:15.430090 2074 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:54:15.433155 kubelet[2074]: I0813 00:54:15.430466 2074 server.go:1274] "Started kubelet" Aug 13 00:54:15.433155 kubelet[2074]: I0813 00:54:15.430986 2074 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:15.433155 kubelet[2074]: I0813 00:54:15.431412 2074 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:15.433155 kubelet[2074]: I0813 00:54:15.431508 2074 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:15.433155 kubelet[2074]: I0813 00:54:15.431663 2074 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:15.433342 kubelet[2074]: I0813 00:54:15.433231 2074 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:54:15.435308 kubelet[2074]: I0813 00:54:15.435285 2074 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:15.437448 kubelet[2074]: I0813 00:54:15.437411 2074 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:54:15.437720 kubelet[2074]: E0813 00:54:15.437694 2074 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:15.439026 kubelet[2074]: I0813 00:54:15.438992 2074 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:15.441173 kubelet[2074]: I0813 00:54:15.441158 2074 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:54:15.441256 kubelet[2074]: I0813 00:54:15.441241 2074 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:54:15.441736 kubelet[2074]: I0813 00:54:15.441673 2074 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:15.441848 kubelet[2074]: I0813 00:54:15.441805 2074 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:54:15.448169 kubelet[2074]: I0813 00:54:15.448137 2074 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:15.448992 kubelet[2074]: I0813 00:54:15.448975 2074 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:54:15.448992 kubelet[2074]: I0813 00:54:15.448993 2074 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:54:15.449075 kubelet[2074]: I0813 00:54:15.449007 2074 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:54:15.449075 kubelet[2074]: E0813 00:54:15.449042 2074 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:15.484169 kubelet[2074]: I0813 00:54:15.484140 2074 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:54:15.484169 kubelet[2074]: I0813 00:54:15.484157 2074 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:15.484169 kubelet[2074]: I0813 00:54:15.484177 2074 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:15.484373 kubelet[2074]: I0813 00:54:15.484312 2074 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:54:15.484373 kubelet[2074]: I0813 00:54:15.484324 2074 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:54:15.484373 kubelet[2074]: I0813 00:54:15.484348 2074 policy_none.go:49] "None policy: Start" Aug 13 00:54:15.484890 kubelet[2074]: I0813 00:54:15.484871 2074 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:54:15.484937 kubelet[2074]: I0813 00:54:15.484894 2074 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:54:15.485035 kubelet[2074]: I0813 00:54:15.485019 2074 state_mem.go:75] "Updated machine memory state" Aug 13 00:54:15.486476 kubelet[2074]: I0813 00:54:15.486448 2074 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:54:15.486633 kubelet[2074]: I0813 00:54:15.486611 2074 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:15.486691 kubelet[2074]: I0813 00:54:15.486629 2074 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:15.487508 kubelet[2074]: I0813 00:54:15.487490 2074 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:15.591118 kubelet[2074]: I0813 00:54:15.590977 2074 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:54:15.643630 kubelet[2074]: I0813 00:54:15.643549 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:15.643630 kubelet[2074]: I0813 00:54:15.643615 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:15.643630 kubelet[2074]: I0813 00:54:15.643634 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:15.643630 kubelet[2074]: I0813 00:54:15.643650 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f4140d73dbbf7a0bb72a0d1599d5513-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7f4140d73dbbf7a0bb72a0d1599d5513\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:15.643971 kubelet[2074]: I0813 00:54:15.643668 
2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:15.643971 kubelet[2074]: I0813 00:54:15.643738 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:15.643971 kubelet[2074]: I0813 00:54:15.643759 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:15.643971 kubelet[2074]: I0813 00:54:15.643773 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f4140d73dbbf7a0bb72a0d1599d5513-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f4140d73dbbf7a0bb72a0d1599d5513\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:15.643971 kubelet[2074]: I0813 00:54:15.643787 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f4140d73dbbf7a0bb72a0d1599d5513-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7f4140d73dbbf7a0bb72a0d1599d5513\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:15.869438 kubelet[2074]: I0813 00:54:15.869245 2074 
kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 00:54:15.869438 kubelet[2074]: I0813 00:54:15.869386 2074 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:54:15.870290 kubelet[2074]: E0813 00:54:15.870265 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:15.870630 kubelet[2074]: E0813 00:54:15.870561 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:15.870630 kubelet[2074]: E0813 00:54:15.870631 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:16.006330 sudo[2110]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:54:16.006547 sudo[2110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:54:16.429985 kubelet[2074]: I0813 00:54:16.429925 2074 apiserver.go:52] "Watching apiserver" Aug 13 00:54:16.442595 kubelet[2074]: I0813 00:54:16.442568 2074 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:54:16.464502 kubelet[2074]: E0813 00:54:16.464476 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:16.466304 kubelet[2074]: E0813 00:54:16.465173 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:16.620417 sudo[2110]: pam_unix(sudo:session): session closed for user root Aug 13 
00:54:16.805926 kubelet[2074]: E0813 00:54:16.805740 2074 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:16.805926 kubelet[2074]: E0813 00:54:16.805896 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:17.031768 update_engine[1297]: I0813 00:54:17.031700 1297 update_attempter.cc:509] Updating boot flags... Aug 13 00:54:17.265045 kubelet[2074]: I0813 00:54:17.263116 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.263087623 podStartE2EDuration="2.263087623s" podCreationTimestamp="2025-08-13 00:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:17.263049696 +0000 UTC m=+1.890516733" watchObservedRunningTime="2025-08-13 00:54:17.263087623 +0000 UTC m=+1.890554660" Aug 13 00:54:17.265045 kubelet[2074]: I0813 00:54:17.264586 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.2632092 podStartE2EDuration="2.2632092s" podCreationTimestamp="2025-08-13 00:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:16.806064714 +0000 UTC m=+1.433531772" watchObservedRunningTime="2025-08-13 00:54:17.2632092 +0000 UTC m=+1.890676237" Aug 13 00:54:17.365192 kubelet[2074]: I0813 00:54:17.354391 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.354373301 podStartE2EDuration="2.354373301s" podCreationTimestamp="2025-08-13 00:54:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:17.317399586 +0000 UTC m=+1.944866623" watchObservedRunningTime="2025-08-13 00:54:17.354373301 +0000 UTC m=+1.981840338" Aug 13 00:54:17.466146 kubelet[2074]: E0813 00:54:17.466093 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:17.466690 kubelet[2074]: E0813 00:54:17.466669 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:20.548199 sudo[1432]: pam_unix(sudo:session): session closed for user root Aug 13 00:54:20.549996 sshd[1426]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:20.553082 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:54406.service: Deactivated successfully. Aug 13 00:54:20.554753 systemd-logind[1294]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:54:20.554796 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:54:20.555798 systemd-logind[1294]: Removed session 5. Aug 13 00:54:20.923126 kubelet[2074]: E0813 00:54:20.923072 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:21.952860 kubelet[2074]: I0813 00:54:21.952800 2074 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:54:21.953374 kubelet[2074]: I0813 00:54:21.953364 2074 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:54:21.953421 env[1311]: time="2025-08-13T00:54:21.953167730Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:54:22.690496 kubelet[2074]: I0813 00:54:22.690439 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a687261-ace8-4aff-b774-2ae90a3b5186-kube-proxy\") pod \"kube-proxy-mt77d\" (UID: \"6a687261-ace8-4aff-b774-2ae90a3b5186\") " pod="kube-system/kube-proxy-mt77d" Aug 13 00:54:22.690496 kubelet[2074]: I0813 00:54:22.690475 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-run\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.690496 kubelet[2074]: I0813 00:54:22.690497 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-net\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.690800 kubelet[2074]: I0813 00:54:22.690516 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-hostproc\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.690800 kubelet[2074]: I0813 00:54:22.690534 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9462f70-04f9-4661-9383-b6a88e53cc2e-clustermesh-secrets\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.690800 kubelet[2074]: I0813 00:54:22.690549 2074 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-bpf-maps\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.690800 kubelet[2074]: I0813 00:54:22.690563 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-etc-cni-netd\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.690800 kubelet[2074]: I0813 00:54:22.690576 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-xtables-lock\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.690800 kubelet[2074]: I0813 00:54:22.690591 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a687261-ace8-4aff-b774-2ae90a3b5186-xtables-lock\") pod \"kube-proxy-mt77d\" (UID: \"6a687261-ace8-4aff-b774-2ae90a3b5186\") " pod="kube-system/kube-proxy-mt77d" Aug 13 00:54:22.691029 kubelet[2074]: I0813 00:54:22.690605 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-cgroup\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.691029 kubelet[2074]: I0813 00:54:22.690620 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cni-path\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.691029 kubelet[2074]: I0813 00:54:22.690632 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-config-path\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.691029 kubelet[2074]: I0813 00:54:22.690644 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpgnh\" (UniqueName: \"kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-kube-api-access-jpgnh\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.691029 kubelet[2074]: I0813 00:54:22.690657 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-lib-modules\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.691029 kubelet[2074]: I0813 00:54:22.690669 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a687261-ace8-4aff-b774-2ae90a3b5186-lib-modules\") pod \"kube-proxy-mt77d\" (UID: \"6a687261-ace8-4aff-b774-2ae90a3b5186\") " pod="kube-system/kube-proxy-mt77d" Aug 13 00:54:22.691220 kubelet[2074]: I0813 00:54:22.690681 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-hubble-tls\") pod \"cilium-25552\" (UID: 
\"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.691220 kubelet[2074]: I0813 00:54:22.690694 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pcnk\" (UniqueName: \"kubernetes.io/projected/6a687261-ace8-4aff-b774-2ae90a3b5186-kube-api-access-5pcnk\") pod \"kube-proxy-mt77d\" (UID: \"6a687261-ace8-4aff-b774-2ae90a3b5186\") " pod="kube-system/kube-proxy-mt77d" Aug 13 00:54:22.691220 kubelet[2074]: I0813 00:54:22.690709 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-kernel\") pod \"cilium-25552\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") " pod="kube-system/cilium-25552" Aug 13 00:54:22.792288 kubelet[2074]: I0813 00:54:22.792221 2074 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:54:22.987703 kubelet[2074]: E0813 00:54:22.987537 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:22.988149 env[1311]: time="2025-08-13T00:54:22.988071497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mt77d,Uid:6a687261-ace8-4aff-b774-2ae90a3b5186,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:22.995042 kubelet[2074]: E0813 00:54:22.995006 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:22.995391 env[1311]: time="2025-08-13T00:54:22.995361544Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-25552,Uid:d9462f70-04f9-4661-9383-b6a88e53cc2e,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:23.495044 env[1311]: time="2025-08-13T00:54:23.494806188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:23.495044 env[1311]: time="2025-08-13T00:54:23.494871460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:23.495044 env[1311]: time="2025-08-13T00:54:23.494893238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:23.495268 env[1311]: time="2025-08-13T00:54:23.495207602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccc73fa76a4286373416ffd734d2b9eb0934222252289959f83bde27660f8327 pid=2180 runtime=io.containerd.runc.v2 Aug 13 00:54:23.498492 kubelet[2074]: I0813 00:54:23.498330 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442lw\" (UniqueName: \"kubernetes.io/projected/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-kube-api-access-442lw\") pod \"cilium-operator-5d85765b45-fdssn\" (UID: \"3bbf3df6-7bf5-434a-9666-adb38c73ef5b\") " pod="kube-system/cilium-operator-5d85765b45-fdssn" Aug 13 00:54:23.498724 kubelet[2074]: I0813 00:54:23.498552 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-cilium-config-path\") pod \"cilium-operator-5d85765b45-fdssn\" (UID: \"3bbf3df6-7bf5-434a-9666-adb38c73ef5b\") " pod="kube-system/cilium-operator-5d85765b45-fdssn" Aug 13 00:54:23.498807 env[1311]: time="2025-08-13T00:54:23.498687849Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:23.499010 env[1311]: time="2025-08-13T00:54:23.498820668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:23.499010 env[1311]: time="2025-08-13T00:54:23.498855975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:23.499117 env[1311]: time="2025-08-13T00:54:23.499047492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad pid=2192 runtime=io.containerd.runc.v2 Aug 13 00:54:23.539048 env[1311]: time="2025-08-13T00:54:23.539001221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25552,Uid:d9462f70-04f9-4661-9383-b6a88e53cc2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\"" Aug 13 00:54:23.539231 env[1311]: time="2025-08-13T00:54:23.539001481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mt77d,Uid:6a687261-ace8-4aff-b774-2ae90a3b5186,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccc73fa76a4286373416ffd734d2b9eb0934222252289959f83bde27660f8327\"" Aug 13 00:54:23.539609 kubelet[2074]: E0813 00:54:23.539570 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:23.540269 kubelet[2074]: E0813 00:54:23.540249 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:23.542469 env[1311]: time="2025-08-13T00:54:23.541618829Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:54:23.543003 env[1311]: time="2025-08-13T00:54:23.542977668Z" level=info msg="CreateContainer within sandbox \"ccc73fa76a4286373416ffd734d2b9eb0934222252289959f83bde27660f8327\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:54:23.561215 env[1311]: time="2025-08-13T00:54:23.561145714Z" level=info msg="CreateContainer within sandbox \"ccc73fa76a4286373416ffd734d2b9eb0934222252289959f83bde27660f8327\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad00a2e03acb75039d22c365d2d05fa269c23a0e36f2b62656ccdeb89839b82a\"" Aug 13 00:54:23.562290 env[1311]: time="2025-08-13T00:54:23.561722571Z" level=info msg="StartContainer for \"ad00a2e03acb75039d22c365d2d05fa269c23a0e36f2b62656ccdeb89839b82a\"" Aug 13 00:54:23.614232 env[1311]: time="2025-08-13T00:54:23.614159517Z" level=info msg="StartContainer for \"ad00a2e03acb75039d22c365d2d05fa269c23a0e36f2b62656ccdeb89839b82a\" returns successfully" Aug 13 00:54:23.775597 kubelet[2074]: E0813 00:54:23.775439 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:23.776135 env[1311]: time="2025-08-13T00:54:23.776092782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fdssn,Uid:3bbf3df6-7bf5-434a-9666-adb38c73ef5b,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:23.803848 env[1311]: time="2025-08-13T00:54:23.796645080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:23.803848 env[1311]: time="2025-08-13T00:54:23.796693876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:23.803848 env[1311]: time="2025-08-13T00:54:23.796703928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:23.803848 env[1311]: time="2025-08-13T00:54:23.796847220Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3 pid=2336 runtime=io.containerd.runc.v2 Aug 13 00:54:23.846851 env[1311]: time="2025-08-13T00:54:23.846777309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fdssn,Uid:3bbf3df6-7bf5-434a-9666-adb38c73ef5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\"" Aug 13 00:54:23.847442 kubelet[2074]: E0813 00:54:23.847420 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:24.335866 kubelet[2074]: E0813 00:54:24.335802 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:24.482190 kubelet[2074]: E0813 00:54:24.482128 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:24.483222 kubelet[2074]: E0813 00:54:24.483182 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:24.502423 kubelet[2074]: I0813 00:54:24.502050 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mt77d" 
podStartSLOduration=2.5020240879999998 podStartE2EDuration="2.502024088s" podCreationTimestamp="2025-08-13 00:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:24.501057807 +0000 UTC m=+9.128524844" watchObservedRunningTime="2025-08-13 00:54:24.502024088 +0000 UTC m=+9.129491126" Aug 13 00:54:27.099387 kubelet[2074]: E0813 00:54:27.099352 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:28.302401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024301553.mount: Deactivated successfully. Aug 13 00:54:30.929127 kubelet[2074]: E0813 00:54:30.929095 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:33.402376 env[1311]: time="2025-08-13T00:54:33.402294316Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:33.842936 env[1311]: time="2025-08-13T00:54:33.842734701Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:33.844604 env[1311]: time="2025-08-13T00:54:33.844574728Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:33.845052 env[1311]: time="2025-08-13T00:54:33.845021765Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:54:33.846369 env[1311]: time="2025-08-13T00:54:33.846263020Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:54:33.847287 env[1311]: time="2025-08-13T00:54:33.847245389Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:54:33.861658 env[1311]: time="2025-08-13T00:54:33.861615510Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\"" Aug 13 00:54:33.862195 env[1311]: time="2025-08-13T00:54:33.862165220Z" level=info msg="StartContainer for \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\"" Aug 13 00:54:33.906445 env[1311]: time="2025-08-13T00:54:33.906387271Z" level=info msg="StartContainer for \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\" returns successfully" Aug 13 00:54:34.132047 env[1311]: time="2025-08-13T00:54:34.131999461Z" level=info msg="shim disconnected" id=3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c Aug 13 00:54:34.132047 env[1311]: time="2025-08-13T00:54:34.132047741Z" level=warning msg="cleaning up after shim disconnected" id=3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c namespace=k8s.io Aug 13 00:54:34.132312 env[1311]: time="2025-08-13T00:54:34.132059655Z" level=info msg="cleaning up dead shim" Aug 13 00:54:34.138865 env[1311]: time="2025-08-13T00:54:34.138786954Z" level=warning msg="cleanup warnings 
time=\"2025-08-13T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\n" Aug 13 00:54:34.503162 kubelet[2074]: E0813 00:54:34.502709 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:34.504793 env[1311]: time="2025-08-13T00:54:34.504745792Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:54:34.520368 env[1311]: time="2025-08-13T00:54:34.519695340Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\"" Aug 13 00:54:34.521017 env[1311]: time="2025-08-13T00:54:34.520984384Z" level=info msg="StartContainer for \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\"" Aug 13 00:54:34.563678 env[1311]: time="2025-08-13T00:54:34.562363966Z" level=info msg="StartContainer for \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\" returns successfully" Aug 13 00:54:34.572160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:54:34.572422 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:54:34.572986 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:54:34.574881 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:54:34.586037 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 00:54:34.595997 env[1311]: time="2025-08-13T00:54:34.595942529Z" level=info msg="shim disconnected" id=ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da Aug 13 00:54:34.596180 env[1311]: time="2025-08-13T00:54:34.595997181Z" level=warning msg="cleaning up after shim disconnected" id=ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da namespace=k8s.io Aug 13 00:54:34.596180 env[1311]: time="2025-08-13T00:54:34.596010970Z" level=info msg="cleaning up dead shim" Aug 13 00:54:34.603023 env[1311]: time="2025-08-13T00:54:34.602971791Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2580 runtime=io.containerd.runc.v2\n" Aug 13 00:54:34.858152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c-rootfs.mount: Deactivated successfully. Aug 13 00:54:35.505105 kubelet[2074]: E0813 00:54:35.505064 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:35.507002 env[1311]: time="2025-08-13T00:54:35.506937079Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:54:35.756256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854747805.mount: Deactivated successfully. Aug 13 00:54:35.770900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989174206.mount: Deactivated successfully. 
Aug 13 00:54:35.773484 env[1311]: time="2025-08-13T00:54:35.772855202Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\"" Aug 13 00:54:35.775092 env[1311]: time="2025-08-13T00:54:35.775029423Z" level=info msg="StartContainer for \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\"" Aug 13 00:54:35.833495 env[1311]: time="2025-08-13T00:54:35.833440438Z" level=info msg="StartContainer for \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\" returns successfully" Aug 13 00:54:35.870363 env[1311]: time="2025-08-13T00:54:35.870297384Z" level=info msg="shim disconnected" id=6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b Aug 13 00:54:35.870363 env[1311]: time="2025-08-13T00:54:35.870346615Z" level=warning msg="cleaning up after shim disconnected" id=6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b namespace=k8s.io Aug 13 00:54:35.870363 env[1311]: time="2025-08-13T00:54:35.870354743Z" level=info msg="cleaning up dead shim" Aug 13 00:54:35.877361 env[1311]: time="2025-08-13T00:54:35.877317852Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2637 runtime=io.containerd.runc.v2\n" Aug 13 00:54:36.435982 env[1311]: time="2025-08-13T00:54:36.435900931Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.437839 env[1311]: time="2025-08-13T00:54:36.437794141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.439296 env[1311]: time="2025-08-13T00:54:36.439243069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.439850 env[1311]: time="2025-08-13T00:54:36.439792377Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:54:36.442155 env[1311]: time="2025-08-13T00:54:36.442127385Z" level=info msg="CreateContainer within sandbox \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:54:36.453983 env[1311]: time="2025-08-13T00:54:36.453719292Z" level=info msg="CreateContainer within sandbox \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\"" Aug 13 00:54:36.456506 env[1311]: time="2025-08-13T00:54:36.456458159Z" level=info msg="StartContainer for \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\"" Aug 13 00:54:36.509407 kubelet[2074]: E0813 00:54:36.509370 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:36.514615 env[1311]: time="2025-08-13T00:54:36.514571329Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:54:36.678960 env[1311]: time="2025-08-13T00:54:36.678885109Z" 
level=info msg="StartContainer for \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\" returns successfully" Aug 13 00:54:36.700109 env[1311]: time="2025-08-13T00:54:36.699970802Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\"" Aug 13 00:54:36.700804 env[1311]: time="2025-08-13T00:54:36.700771577Z" level=info msg="StartContainer for \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\"" Aug 13 00:54:36.799144 env[1311]: time="2025-08-13T00:54:36.799088540Z" level=info msg="StartContainer for \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\" returns successfully" Aug 13 00:54:37.114742 env[1311]: time="2025-08-13T00:54:37.114671896Z" level=info msg="shim disconnected" id=70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68 Aug 13 00:54:37.114742 env[1311]: time="2025-08-13T00:54:37.114736509Z" level=warning msg="cleaning up after shim disconnected" id=70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68 namespace=k8s.io Aug 13 00:54:37.114742 env[1311]: time="2025-08-13T00:54:37.114748123Z" level=info msg="cleaning up dead shim" Aug 13 00:54:37.135855 env[1311]: time="2025-08-13T00:54:37.135753448Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:54:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2727 runtime=io.containerd.runc.v2\n" Aug 13 00:54:37.512414 kubelet[2074]: E0813 00:54:37.512293 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:37.514873 kubelet[2074]: E0813 00:54:37.514855 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:37.516141 env[1311]: time="2025-08-13T00:54:37.516096115Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:54:37.627564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118448633.mount: Deactivated successfully. Aug 13 00:54:37.631905 env[1311]: time="2025-08-13T00:54:37.631852695Z" level=info msg="CreateContainer within sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\"" Aug 13 00:54:37.632864 env[1311]: time="2025-08-13T00:54:37.632818954Z" level=info msg="StartContainer for \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\"" Aug 13 00:54:37.644611 kubelet[2074]: I0813 00:54:37.644468 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fdssn" podStartSLOduration=2.051985734 podStartE2EDuration="14.644441296s" podCreationTimestamp="2025-08-13 00:54:23 +0000 UTC" firstStartedPulling="2025-08-13 00:54:23.848292188 +0000 UTC m=+8.475759225" lastFinishedPulling="2025-08-13 00:54:36.44074762 +0000 UTC m=+21.068214787" observedRunningTime="2025-08-13 00:54:37.62065817 +0000 UTC m=+22.248125207" watchObservedRunningTime="2025-08-13 00:54:37.644441296 +0000 UTC m=+22.271908363" Aug 13 00:54:37.690325 env[1311]: time="2025-08-13T00:54:37.690284548Z" level=info msg="StartContainer for \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\" returns successfully" Aug 13 00:54:37.809640 kubelet[2074]: I0813 00:54:37.809494 2074 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:54:37.997706 kubelet[2074]: I0813 00:54:37.997648 2074 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85g4w\" (UniqueName: \"kubernetes.io/projected/dbd725bd-aa54-4e57-97ca-ad62121cf8d2-kube-api-access-85g4w\") pod \"coredns-7c65d6cfc9-q5nxb\" (UID: \"dbd725bd-aa54-4e57-97ca-ad62121cf8d2\") " pod="kube-system/coredns-7c65d6cfc9-q5nxb" Aug 13 00:54:37.997706 kubelet[2074]: I0813 00:54:37.997705 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd725bd-aa54-4e57-97ca-ad62121cf8d2-config-volume\") pod \"coredns-7c65d6cfc9-q5nxb\" (UID: \"dbd725bd-aa54-4e57-97ca-ad62121cf8d2\") " pod="kube-system/coredns-7c65d6cfc9-q5nxb" Aug 13 00:54:37.998056 kubelet[2074]: I0813 00:54:37.997734 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qtf\" (UniqueName: \"kubernetes.io/projected/e58bd440-3743-4b39-b596-8b59fc317f54-kube-api-access-r6qtf\") pod \"coredns-7c65d6cfc9-mlklb\" (UID: \"e58bd440-3743-4b39-b596-8b59fc317f54\") " pod="kube-system/coredns-7c65d6cfc9-mlklb" Aug 13 00:54:37.998056 kubelet[2074]: I0813 00:54:37.997760 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e58bd440-3743-4b39-b596-8b59fc317f54-config-volume\") pod \"coredns-7c65d6cfc9-mlklb\" (UID: \"e58bd440-3743-4b39-b596-8b59fc317f54\") " pod="kube-system/coredns-7c65d6cfc9-mlklb" Aug 13 00:54:38.156372 kubelet[2074]: E0813 00:54:38.155714 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.156372 kubelet[2074]: E0813 00:54:38.156164 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Aug 13 00:54:38.158762 env[1311]: time="2025-08-13T00:54:38.158716128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q5nxb,Uid:dbd725bd-aa54-4e57-97ca-ad62121cf8d2,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:38.158898 env[1311]: time="2025-08-13T00:54:38.158799368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mlklb,Uid:e58bd440-3743-4b39-b596-8b59fc317f54,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:38.520144 kubelet[2074]: E0813 00:54:38.519804 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.520144 kubelet[2074]: E0813 00:54:38.519955 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:39.522240 kubelet[2074]: E0813 00:54:39.522183 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:40.524207 kubelet[2074]: E0813 00:54:40.524168 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:40.526473 systemd-networkd[1078]: cilium_host: Link UP Aug 13 00:54:40.526674 systemd-networkd[1078]: cilium_net: Link UP Aug 13 00:54:40.526678 systemd-networkd[1078]: cilium_net: Gained carrier Aug 13 00:54:40.526894 systemd-networkd[1078]: cilium_host: Gained carrier Aug 13 00:54:40.528860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:54:40.528936 systemd-networkd[1078]: cilium_host: Gained IPv6LL Aug 13 00:54:40.608940 systemd-networkd[1078]: cilium_net: Gained IPv6LL Aug 13 00:54:40.618895 systemd-networkd[1078]: 
cilium_vxlan: Link UP Aug 13 00:54:40.618906 systemd-networkd[1078]: cilium_vxlan: Gained carrier Aug 13 00:54:40.818866 kernel: NET: Registered PF_ALG protocol family Aug 13 00:54:41.435454 systemd-networkd[1078]: lxc_health: Link UP Aug 13 00:54:41.444648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:54:41.444414 systemd-networkd[1078]: lxc_health: Gained carrier Aug 13 00:54:41.707556 systemd-networkd[1078]: lxc6defd0a7bd5a: Link UP Aug 13 00:54:41.716877 kernel: eth0: renamed from tmp56dfe Aug 13 00:54:41.729377 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:54:41.729458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6defd0a7bd5a: link becomes ready Aug 13 00:54:41.730209 systemd-networkd[1078]: lxc6defd0a7bd5a: Gained carrier Aug 13 00:54:41.735656 systemd-networkd[1078]: lxc06afc2165095: Link UP Aug 13 00:54:41.743864 kernel: eth0: renamed from tmp58ef4 Aug 13 00:54:41.750025 systemd-networkd[1078]: lxc06afc2165095: Gained carrier Aug 13 00:54:41.750851 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc06afc2165095: link becomes ready Aug 13 00:54:42.640006 systemd-networkd[1078]: cilium_vxlan: Gained IPv6LL Aug 13 00:54:42.704047 systemd-networkd[1078]: lxc_health: Gained IPv6LL Aug 13 00:54:42.832082 systemd-networkd[1078]: lxc06afc2165095: Gained IPv6LL Aug 13 00:54:43.002254 kubelet[2074]: E0813 00:54:43.002109 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:43.018045 kubelet[2074]: I0813 00:54:43.017950 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-25552" podStartSLOduration=10.712887144 podStartE2EDuration="21.017926703s" podCreationTimestamp="2025-08-13 00:54:22 +0000 UTC" firstStartedPulling="2025-08-13 00:54:23.541055373 +0000 UTC m=+8.168522400" lastFinishedPulling="2025-08-13 00:54:33.846094922 +0000 UTC 
m=+18.473561959" observedRunningTime="2025-08-13 00:54:38.536407638 +0000 UTC m=+23.163874675" watchObservedRunningTime="2025-08-13 00:54:43.017926703 +0000 UTC m=+27.645393740" Aug 13 00:54:43.025034 systemd-networkd[1078]: lxc6defd0a7bd5a: Gained IPv6LL Aug 13 00:54:45.131149 kubelet[2074]: I0813 00:54:45.131107 2074 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:54:45.131692 kubelet[2074]: E0813 00:54:45.131664 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.252352 env[1311]: time="2025-08-13T00:54:45.252269679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:45.252352 env[1311]: time="2025-08-13T00:54:45.252318897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:45.252740 env[1311]: time="2025-08-13T00:54:45.252342315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:45.252740 env[1311]: time="2025-08-13T00:54:45.252576135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58ef43fd6e0dcb8379eb9c966c346bc94b3c4374a25adccde2b9ab3b374284e2 pid=3314 runtime=io.containerd.runc.v2 Aug 13 00:54:45.258411 env[1311]: time="2025-08-13T00:54:45.258225861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:45.258411 env[1311]: time="2025-08-13T00:54:45.258263697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:45.258411 env[1311]: time="2025-08-13T00:54:45.258273167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:45.258579 env[1311]: time="2025-08-13T00:54:45.258489062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56dfe9e06a30186bd5526089bc45f135a630631206d8b19da21e28cb30cbc57c pid=3321 runtime=io.containerd.runc.v2 Aug 13 00:54:45.275410 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:54:45.293720 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:54:45.307554 env[1311]: time="2025-08-13T00:54:45.306755275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mlklb,Uid:e58bd440-3743-4b39-b596-8b59fc317f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"58ef43fd6e0dcb8379eb9c966c346bc94b3c4374a25adccde2b9ab3b374284e2\"" Aug 13 00:54:45.309651 kubelet[2074]: E0813 00:54:45.309628 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.318133 env[1311]: time="2025-08-13T00:54:45.318072612Z" level=info msg="CreateContainer within sandbox \"58ef43fd6e0dcb8379eb9c966c346bc94b3c4374a25adccde2b9ab3b374284e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:54:45.325137 env[1311]: time="2025-08-13T00:54:45.324521617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q5nxb,Uid:dbd725bd-aa54-4e57-97ca-ad62121cf8d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"56dfe9e06a30186bd5526089bc45f135a630631206d8b19da21e28cb30cbc57c\"" Aug 13 00:54:45.325276 kubelet[2074]: 
E0813 00:54:45.325108 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.327474 env[1311]: time="2025-08-13T00:54:45.327061611Z" level=info msg="CreateContainer within sandbox \"56dfe9e06a30186bd5526089bc45f135a630631206d8b19da21e28cb30cbc57c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:54:45.346309 env[1311]: time="2025-08-13T00:54:45.346260278Z" level=info msg="CreateContainer within sandbox \"56dfe9e06a30186bd5526089bc45f135a630631206d8b19da21e28cb30cbc57c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2bf26317e677e879a5eafbc567947e1431c8d65dbf87a65874e3f777e0d004a\"" Aug 13 00:54:45.346914 env[1311]: time="2025-08-13T00:54:45.346861349Z" level=info msg="StartContainer for \"f2bf26317e677e879a5eafbc567947e1431c8d65dbf87a65874e3f777e0d004a\"" Aug 13 00:54:45.348390 env[1311]: time="2025-08-13T00:54:45.348337590Z" level=info msg="CreateContainer within sandbox \"58ef43fd6e0dcb8379eb9c966c346bc94b3c4374a25adccde2b9ab3b374284e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9a84239f475ca46b450c8d98678be9ba0a3bcec667854bd9b88205e876797e1\"" Aug 13 00:54:45.348723 env[1311]: time="2025-08-13T00:54:45.348697695Z" level=info msg="StartContainer for \"a9a84239f475ca46b450c8d98678be9ba0a3bcec667854bd9b88205e876797e1\"" Aug 13 00:54:45.417941 env[1311]: time="2025-08-13T00:54:45.417806316Z" level=info msg="StartContainer for \"a9a84239f475ca46b450c8d98678be9ba0a3bcec667854bd9b88205e876797e1\" returns successfully" Aug 13 00:54:45.419339 env[1311]: time="2025-08-13T00:54:45.419304832Z" level=info msg="StartContainer for \"f2bf26317e677e879a5eafbc567947e1431c8d65dbf87a65874e3f777e0d004a\" returns successfully" Aug 13 00:54:45.541457 kubelet[2074]: E0813 00:54:45.540454 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.543650 kubelet[2074]: E0813 00:54:45.543269 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.543650 kubelet[2074]: E0813 00:54:45.543499 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.565345 kubelet[2074]: I0813 00:54:45.565031 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mlklb" podStartSLOduration=22.565008365 podStartE2EDuration="22.565008365s" podCreationTimestamp="2025-08-13 00:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:45.552873443 +0000 UTC m=+30.180340480" watchObservedRunningTime="2025-08-13 00:54:45.565008365 +0000 UTC m=+30.192475403" Aug 13 00:54:45.565345 kubelet[2074]: I0813 00:54:45.565175 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q5nxb" podStartSLOduration=22.565166925 podStartE2EDuration="22.565166925s" podCreationTimestamp="2025-08-13 00:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:45.563373183 +0000 UTC m=+30.190840220" watchObservedRunningTime="2025-08-13 00:54:45.565166925 +0000 UTC m=+30.192633982" Aug 13 00:54:46.256531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655838376.mount: Deactivated successfully. 
Aug 13 00:54:46.544692 kubelet[2074]: E0813 00:54:46.544570 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:47.546718 kubelet[2074]: E0813 00:54:47.546670 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.156685 kubelet[2074]: E0813 00:54:48.156636 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.548660 kubelet[2074]: E0813 00:54:48.548536 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:53.425029 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:46582.service. Aug 13 00:54:53.467426 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 46582 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:54:53.468731 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:53.472350 systemd-logind[1294]: New session 6 of user core. Aug 13 00:54:53.473107 systemd[1]: Started session-6.scope. Aug 13 00:54:53.617155 sshd[3466]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:53.619347 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:46582.service: Deactivated successfully. Aug 13 00:54:53.620212 systemd-logind[1294]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:54:53.620234 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:54:53.621104 systemd-logind[1294]: Removed session 6. Aug 13 00:54:58.620462 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:46584.service. 
Aug 13 00:54:58.660090 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 46584 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:54:58.661047 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:58.664361 systemd-logind[1294]: New session 7 of user core. Aug 13 00:54:58.665327 systemd[1]: Started session-7.scope. Aug 13 00:54:58.774870 sshd[3485]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:58.777044 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:46584.service: Deactivated successfully. Aug 13 00:54:58.778091 systemd-logind[1294]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:54:58.778159 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:54:58.778983 systemd-logind[1294]: Removed session 7. Aug 13 00:55:03.778642 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:37898.service. Aug 13 00:55:03.838693 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 37898 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:03.839910 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:03.844035 systemd-logind[1294]: New session 8 of user core. Aug 13 00:55:03.844771 systemd[1]: Started session-8.scope. Aug 13 00:55:04.069479 sshd[3500]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:04.072001 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:37898.service: Deactivated successfully. Aug 13 00:55:04.072984 systemd-logind[1294]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:55:04.073030 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:55:04.073737 systemd-logind[1294]: Removed session 8. Aug 13 00:55:09.073367 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:37900.service. 
Aug 13 00:55:09.114847 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 37900 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:09.116269 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:09.119857 systemd-logind[1294]: New session 9 of user core. Aug 13 00:55:09.120620 systemd[1]: Started session-9.scope. Aug 13 00:55:09.221602 sshd[3515]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:09.224887 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:37900.service: Deactivated successfully. Aug 13 00:55:09.225890 systemd-logind[1294]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:55:09.225915 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:55:09.226618 systemd-logind[1294]: Removed session 9. Aug 13 00:55:14.225352 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:39976.service. Aug 13 00:55:14.265648 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 39976 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:14.266737 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:14.270127 systemd-logind[1294]: New session 10 of user core. Aug 13 00:55:14.271121 systemd[1]: Started session-10.scope. Aug 13 00:55:14.400551 sshd[3531]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:14.403303 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:39976.service: Deactivated successfully. Aug 13 00:55:14.404244 systemd-logind[1294]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:55:14.404263 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:55:14.405135 systemd-logind[1294]: Removed session 10. Aug 13 00:55:19.404333 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:39992.service. 
Aug 13 00:55:19.448282 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 39992 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:19.449623 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:19.454118 systemd-logind[1294]: New session 11 of user core. Aug 13 00:55:19.455351 systemd[1]: Started session-11.scope. Aug 13 00:55:19.564945 sshd[3548]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:19.567710 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:39992.service: Deactivated successfully. Aug 13 00:55:19.568660 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:55:19.568683 systemd-logind[1294]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:55:19.569445 systemd-logind[1294]: Removed session 11. Aug 13 00:55:24.568771 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:34590.service. Aug 13 00:55:24.613953 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 34590 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:24.615491 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:24.620241 systemd-logind[1294]: New session 12 of user core. Aug 13 00:55:24.621107 systemd[1]: Started session-12.scope. Aug 13 00:55:24.757531 sshd[3565]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:24.760026 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:34592.service. Aug 13 00:55:24.761093 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:34590.service: Deactivated successfully. Aug 13 00:55:24.762139 systemd-logind[1294]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:55:24.762233 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:55:24.763527 systemd-logind[1294]: Removed session 12. 
Aug 13 00:55:24.804688 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 34592 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:24.806278 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:24.810137 systemd-logind[1294]: New session 13 of user core. Aug 13 00:55:24.811051 systemd[1]: Started session-13.scope. Aug 13 00:55:24.968255 sshd[3578]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:24.973221 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:34604.service. Aug 13 00:55:24.978440 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:34592.service: Deactivated successfully. Aug 13 00:55:24.979286 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:55:24.981931 systemd-logind[1294]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:55:24.985562 systemd-logind[1294]: Removed session 13. Aug 13 00:55:25.013421 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 34604 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:25.014874 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:25.018749 systemd-logind[1294]: New session 14 of user core. Aug 13 00:55:25.019497 systemd[1]: Started session-14.scope. Aug 13 00:55:25.131958 sshd[3592]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:25.134625 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:34604.service: Deactivated successfully. Aug 13 00:55:25.135598 systemd-logind[1294]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:55:25.135627 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:55:25.136420 systemd-logind[1294]: Removed session 14. 
Aug 13 00:55:26.450435 kubelet[2074]: E0813 00:55:26.450374 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:27.449928 kubelet[2074]: E0813 00:55:27.449873 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:30.135538 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:43518.service. Aug 13 00:55:30.202559 sshd[3608]: Accepted publickey for core from 10.0.0.1 port 43518 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:30.204455 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:30.209077 systemd-logind[1294]: New session 15 of user core. Aug 13 00:55:30.210251 systemd[1]: Started session-15.scope. Aug 13 00:55:30.368606 sshd[3608]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:30.370763 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:43518.service: Deactivated successfully. Aug 13 00:55:30.371800 systemd-logind[1294]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:55:30.371836 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:55:30.372757 systemd-logind[1294]: Removed session 15. Aug 13 00:55:35.372558 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:43524.service. Aug 13 00:55:35.413597 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 43524 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:35.414780 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:35.418636 systemd-logind[1294]: New session 16 of user core. Aug 13 00:55:35.419639 systemd[1]: Started session-16.scope. 
Aug 13 00:55:35.450486 kubelet[2074]: E0813 00:55:35.450457 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:35.527038 sshd[3622]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:35.529813 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:43524.service: Deactivated successfully. Aug 13 00:55:35.530968 systemd-logind[1294]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:55:35.531039 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:55:35.531986 systemd-logind[1294]: Removed session 16. Aug 13 00:55:40.530295 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:47712.service. Aug 13 00:55:40.572527 sshd[3636]: Accepted publickey for core from 10.0.0.1 port 47712 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:40.573917 sshd[3636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:40.577366 systemd-logind[1294]: New session 17 of user core. Aug 13 00:55:40.578130 systemd[1]: Started session-17.scope. Aug 13 00:55:40.683125 sshd[3636]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:40.686284 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:47714.service. Aug 13 00:55:40.687073 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:47712.service: Deactivated successfully. Aug 13 00:55:40.688210 systemd-logind[1294]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:55:40.688300 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:55:40.689295 systemd-logind[1294]: Removed session 17. 
Aug 13 00:55:40.727508 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 47714 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:40.728867 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:40.732498 systemd-logind[1294]: New session 18 of user core.
Aug 13 00:55:40.733496 systemd[1]: Started session-18.scope.
Aug 13 00:55:41.417880 sshd[3649]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:41.421112 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:47716.service.
Aug 13 00:55:41.421792 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:47714.service: Deactivated successfully.
Aug 13 00:55:41.423459 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:55:41.424015 systemd-logind[1294]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:55:41.424816 systemd-logind[1294]: Removed session 18.
Aug 13 00:55:41.464779 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 47716 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:41.465950 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:41.469979 systemd-logind[1294]: New session 19 of user core.
Aug 13 00:55:41.471039 systemd[1]: Started session-19.scope.
Aug 13 00:55:42.665936 sshd[3661]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:42.668412 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:47724.service.
Aug 13 00:55:42.669341 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:47716.service: Deactivated successfully.
Aug 13 00:55:42.670353 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:55:42.671003 systemd-logind[1294]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:55:42.672112 systemd-logind[1294]: Removed session 19.
Aug 13 00:55:42.710727 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 47724 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:42.712273 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:42.716862 systemd-logind[1294]: New session 20 of user core.
Aug 13 00:55:42.717769 systemd[1]: Started session-20.scope.
Aug 13 00:55:42.946595 sshd[3678]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:42.949355 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:47736.service.
Aug 13 00:55:42.968365 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:47724.service: Deactivated successfully.
Aug 13 00:55:42.969312 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:55:42.971031 systemd-logind[1294]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:55:42.972114 systemd-logind[1294]: Removed session 20.
Aug 13 00:55:42.999416 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 47736 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:43.001086 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:43.005260 systemd-logind[1294]: New session 21 of user core.
Aug 13 00:55:43.006271 systemd[1]: Started session-21.scope.
Aug 13 00:55:43.156385 sshd[3692]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:43.159092 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:47736.service: Deactivated successfully.
Aug 13 00:55:43.160259 systemd-logind[1294]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:55:43.160339 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:55:43.161192 systemd-logind[1294]: Removed session 21.
Aug 13 00:55:48.159904 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:47752.service.
Aug 13 00:55:48.200310 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 47752 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:48.201601 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:48.205416 systemd-logind[1294]: New session 22 of user core.
Aug 13 00:55:48.206159 systemd[1]: Started session-22.scope.
Aug 13 00:55:48.306456 sshd[3708]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:48.308624 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:47752.service: Deactivated successfully.
Aug 13 00:55:48.309781 systemd-logind[1294]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:55:48.309866 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:55:48.310703 systemd-logind[1294]: Removed session 22.
Aug 13 00:55:50.450264 kubelet[2074]: E0813 00:55:50.450222    2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:53.310364 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:39210.service.
Aug 13 00:55:53.350156 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 39210 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:53.351148 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:53.354689 systemd-logind[1294]: New session 23 of user core.
Aug 13 00:55:53.355655 systemd[1]: Started session-23.scope.
Aug 13 00:55:53.452711 sshd[3725]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:53.454762 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:39210.service: Deactivated successfully.
Aug 13 00:55:53.455852 systemd-logind[1294]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:55:53.455889 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:55:53.456766 systemd-logind[1294]: Removed session 23.
Aug 13 00:55:58.449890 kubelet[2074]: E0813 00:55:58.449818    2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:58.456549 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:39222.service.
Aug 13 00:55:58.496410 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:58.497542 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:58.500943 systemd-logind[1294]: New session 24 of user core.
Aug 13 00:55:58.501760 systemd[1]: Started session-24.scope.
Aug 13 00:55:58.602447 sshd[3741]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:58.604652 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:39222.service: Deactivated successfully.
Aug 13 00:55:58.605743 systemd-logind[1294]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:55:58.605807 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:55:58.606726 systemd-logind[1294]: Removed session 24.
Aug 13 00:55:59.450035 kubelet[2074]: E0813 00:55:59.449992    2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:01.450663 kubelet[2074]: E0813 00:56:01.450611    2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:03.606117 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:56142.service.
Aug 13 00:56:03.647869 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 56142 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:56:03.649286 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:03.653265 systemd-logind[1294]: New session 25 of user core.
Aug 13 00:56:03.654064 systemd[1]: Started session-25.scope.
Aug 13 00:56:03.764096 sshd[3756]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:03.767725 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:56148.service.
Aug 13 00:56:03.768322 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:56142.service: Deactivated successfully.
Aug 13 00:56:03.769797 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:56:03.769953 systemd-logind[1294]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:56:03.770919 systemd-logind[1294]: Removed session 25.
Aug 13 00:56:03.809640 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 56148 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:56:03.810926 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:03.814734 systemd-logind[1294]: New session 26 of user core.
Aug 13 00:56:03.815558 systemd[1]: Started session-26.scope.
Aug 13 00:56:05.206780 env[1311]: time="2025-08-13T00:56:05.206711866Z" level=info msg="StopContainer for \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\" with timeout 30 (s)"
Aug 13 00:56:05.207620 env[1311]: time="2025-08-13T00:56:05.207595938Z" level=info msg="Stop container \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\" with signal terminated"
Aug 13 00:56:05.222350 systemd[1]: run-containerd-runc-k8s.io-7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce-runc.0PDFfF.mount: Deactivated successfully.
Aug 13 00:56:05.236488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896-rootfs.mount: Deactivated successfully.
Aug 13 00:56:05.249485 env[1311]: time="2025-08-13T00:56:05.249431546Z" level=info msg="shim disconnected" id=9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896
Aug 13 00:56:05.249725 env[1311]: time="2025-08-13T00:56:05.249488726Z" level=warning msg="cleaning up after shim disconnected" id=9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896 namespace=k8s.io
Aug 13 00:56:05.249725 env[1311]: time="2025-08-13T00:56:05.249503745Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:05.253770 env[1311]: time="2025-08-13T00:56:05.253720855Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:56:05.258180 env[1311]: time="2025-08-13T00:56:05.258135154Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3819 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:05.258949 env[1311]: time="2025-08-13T00:56:05.258925605Z" level=info msg="StopContainer for \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\" with timeout 2 (s)"
Aug 13 00:56:05.259329 env[1311]: time="2025-08-13T00:56:05.259290128Z" level=info msg="Stop container \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\" with signal terminated"
Aug 13 00:56:05.261815 env[1311]: time="2025-08-13T00:56:05.261773960Z" level=info msg="StopContainer for \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\" returns successfully"
Aug 13 00:56:05.262392 env[1311]: time="2025-08-13T00:56:05.262361111Z" level=info msg="StopPodSandbox for \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\""
Aug 13 00:56:05.262460 env[1311]: time="2025-08-13T00:56:05.262423791Z" level=info msg="Container to stop \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:05.265302 systemd-networkd[1078]: lxc_health: Link DOWN
Aug 13 00:56:05.265310 systemd-networkd[1078]: lxc_health: Lost carrier
Aug 13 00:56:05.267158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3-shm.mount: Deactivated successfully.
Aug 13 00:56:05.302536 env[1311]: time="2025-08-13T00:56:05.302469865Z" level=info msg="shim disconnected" id=040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3
Aug 13 00:56:05.302822 env[1311]: time="2025-08-13T00:56:05.302799339Z" level=warning msg="cleaning up after shim disconnected" id=040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3 namespace=k8s.io
Aug 13 00:56:05.302942 env[1311]: time="2025-08-13T00:56:05.302917967Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:05.314713 env[1311]: time="2025-08-13T00:56:05.314655175Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3865 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:05.315058 env[1311]: time="2025-08-13T00:56:05.315018786Z" level=info msg="TearDown network for sandbox \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" successfully"
Aug 13 00:56:05.315058 env[1311]: time="2025-08-13T00:56:05.315049204Z" level=info msg="StopPodSandbox for \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" returns successfully"
Aug 13 00:56:05.328655 env[1311]: time="2025-08-13T00:56:05.328594574Z" level=info msg="shim disconnected" id=7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce
Aug 13 00:56:05.328655 env[1311]: time="2025-08-13T00:56:05.328647125Z" level=warning msg="cleaning up after shim disconnected" id=7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce namespace=k8s.io
Aug 13 00:56:05.328655 env[1311]: time="2025-08-13T00:56:05.328662164Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:05.335885 env[1311]: time="2025-08-13T00:56:05.335838912Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3891 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:05.338756 env[1311]: time="2025-08-13T00:56:05.338720761Z" level=info msg="StopContainer for \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\" returns successfully"
Aug 13 00:56:05.339256 env[1311]: time="2025-08-13T00:56:05.339231513Z" level=info msg="StopPodSandbox for \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\""
Aug 13 00:56:05.339322 env[1311]: time="2025-08-13T00:56:05.339292923Z" level=info msg="Container to stop \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:05.339322 env[1311]: time="2025-08-13T00:56:05.339310405Z" level=info msg="Container to stop \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:05.339387 env[1311]: time="2025-08-13T00:56:05.339323351Z" level=info msg="Container to stop \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:05.339387 env[1311]: time="2025-08-13T00:56:05.339337849Z" level=info msg="Container to stop \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:05.339387 env[1311]: time="2025-08-13T00:56:05.339347166Z" level=info msg="Container to stop \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:05.361340 kubelet[2074]: I0813 00:56:05.359807    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-442lw\" (UniqueName: \"kubernetes.io/projected/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-kube-api-access-442lw\") pod \"3bbf3df6-7bf5-434a-9666-adb38c73ef5b\" (UID: \"3bbf3df6-7bf5-434a-9666-adb38c73ef5b\") "
Aug 13 00:56:05.361340 kubelet[2074]: I0813 00:56:05.359874    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-cilium-config-path\") pod \"3bbf3df6-7bf5-434a-9666-adb38c73ef5b\" (UID: \"3bbf3df6-7bf5-434a-9666-adb38c73ef5b\") "
Aug 13 00:56:05.362485 kubelet[2074]: I0813 00:56:05.362448    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bbf3df6-7bf5-434a-9666-adb38c73ef5b" (UID: "3bbf3df6-7bf5-434a-9666-adb38c73ef5b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:56:05.363515 kubelet[2074]: I0813 00:56:05.363488    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-kube-api-access-442lw" (OuterVolumeSpecName: "kube-api-access-442lw") pod "3bbf3df6-7bf5-434a-9666-adb38c73ef5b" (UID: "3bbf3df6-7bf5-434a-9666-adb38c73ef5b"). InnerVolumeSpecName "kube-api-access-442lw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:56:05.369633 env[1311]: time="2025-08-13T00:56:05.369580567Z" level=info msg="shim disconnected" id=e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad
Aug 13 00:56:05.369798 env[1311]: time="2025-08-13T00:56:05.369662134Z" level=warning msg="cleaning up after shim disconnected" id=e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad namespace=k8s.io
Aug 13 00:56:05.369798 env[1311]: time="2025-08-13T00:56:05.369676812Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:05.376963 env[1311]: time="2025-08-13T00:56:05.376909227Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3925 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:05.377219 env[1311]: time="2025-08-13T00:56:05.377194987Z" level=info msg="TearDown network for sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" successfully"
Aug 13 00:56:05.377284 env[1311]: time="2025-08-13T00:56:05.377218863Z" level=info msg="StopPodSandbox for \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" returns successfully"
Aug 13 00:56:05.460757 kubelet[2074]: I0813 00:56:05.460648    2074 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.460757 kubelet[2074]: I0813 00:56:05.460687    2074 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-442lw\" (UniqueName: \"kubernetes.io/projected/3bbf3df6-7bf5-434a-9666-adb38c73ef5b-kube-api-access-442lw\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.509022 kubelet[2074]: E0813 00:56:05.508968    2074 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:56:05.561391 kubelet[2074]: I0813 00:56:05.561347    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-net\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561391 kubelet[2074]: I0813 00:56:05.561375    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.561598 kubelet[2074]: I0813 00:56:05.561410    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-hubble-tls\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561598 kubelet[2074]: I0813 00:56:05.561436    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-xtables-lock\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561598 kubelet[2074]: I0813 00:56:05.561453    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-cgroup\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561598 kubelet[2074]: I0813 00:56:05.561472    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-etc-cni-netd\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561598 kubelet[2074]: I0813 00:56:05.561479    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.561598 kubelet[2074]: I0813 00:56:05.561496    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9462f70-04f9-4661-9383-b6a88e53cc2e-clustermesh-secrets\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561849 kubelet[2074]: I0813 00:56:05.561500    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.561849 kubelet[2074]: I0813 00:56:05.561513    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-run\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561849 kubelet[2074]: I0813 00:56:05.561543    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cni-path\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561849 kubelet[2074]: I0813 00:56:05.561561    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-kernel\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.561849 kubelet[2074]: I0813 00:56:05.561581    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.561849 kubelet[2074]: I0813 00:56:05.561583    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-config-path\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.562082 kubelet[2074]: I0813 00:56:05.561604    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpgnh\" (UniqueName: \"kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-kube-api-access-jpgnh\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.562082 kubelet[2074]: I0813 00:56:05.561623    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-hostproc\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.562082 kubelet[2074]: I0813 00:56:05.561640    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-bpf-maps\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.562082 kubelet[2074]: I0813 00:56:05.561657    2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-lib-modules\") pod \"d9462f70-04f9-4661-9383-b6a88e53cc2e\" (UID: \"d9462f70-04f9-4661-9383-b6a88e53cc2e\") "
Aug 13 00:56:05.562082 kubelet[2074]: I0813 00:56:05.561694    2074 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.562082 kubelet[2074]: I0813 00:56:05.561707    2074 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.562082 kubelet[2074]: I0813 00:56:05.561717    2074 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.562329 kubelet[2074]: I0813 00:56:05.561728    2074 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.562329 kubelet[2074]: I0813 00:56:05.561753    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.562329 kubelet[2074]: I0813 00:56:05.561777    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cni-path" (OuterVolumeSpecName: "cni-path") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.562329 kubelet[2074]: I0813 00:56:05.561802    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.562329 kubelet[2074]: I0813 00:56:05.561805    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.562512 kubelet[2074]: I0813 00:56:05.561848    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-hostproc" (OuterVolumeSpecName: "hostproc") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.562512 kubelet[2074]: I0813 00:56:05.562186    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:05.564169 kubelet[2074]: I0813 00:56:05.564133    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:56:05.564233 kubelet[2074]: I0813 00:56:05.564202    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9462f70-04f9-4661-9383-b6a88e53cc2e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:56:05.564476 kubelet[2074]: I0813 00:56:05.564453    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:56:05.564911 kubelet[2074]: I0813 00:56:05.564879    2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-kube-api-access-jpgnh" (OuterVolumeSpecName: "kube-api-access-jpgnh") pod "d9462f70-04f9-4661-9383-b6a88e53cc2e" (UID: "d9462f70-04f9-4661-9383-b6a88e53cc2e"). InnerVolumeSpecName "kube-api-access-jpgnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:56:05.662269 kubelet[2074]: I0813 00:56:05.662211    2074 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662269 kubelet[2074]: I0813 00:56:05.662246    2074 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662269 kubelet[2074]: I0813 00:56:05.662258    2074 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpgnh\" (UniqueName: \"kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-kube-api-access-jpgnh\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662269 kubelet[2074]: I0813 00:56:05.662267    2074 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662513 kubelet[2074]: I0813 00:56:05.662293    2074 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662513 kubelet[2074]: I0813 00:56:05.662306    2074 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662513 kubelet[2074]: I0813 00:56:05.662316    2074 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9462f70-04f9-4661-9383-b6a88e53cc2e-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662513 kubelet[2074]: I0813 00:56:05.662338    2074 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662513 kubelet[2074]: I0813 00:56:05.662345    2074 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9462f70-04f9-4661-9383-b6a88e53cc2e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.662513 kubelet[2074]: I0813 00:56:05.662352    2074 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9462f70-04f9-4661-9383-b6a88e53cc2e-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:05.686002 kubelet[2074]: I0813 00:56:05.685975    2074 scope.go:117] "RemoveContainer" containerID="9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896"
Aug 13 00:56:05.687437 env[1311]: time="2025-08-13T00:56:05.687408520Z" level=info msg="RemoveContainer for \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\""
Aug 13 00:56:05.691727 env[1311]: time="2025-08-13T00:56:05.691678963Z" level=info msg="RemoveContainer for \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\" returns successfully"
Aug 13 00:56:05.692038 kubelet[2074]: I0813 00:56:05.691997    2074 scope.go:117] "RemoveContainer" containerID="9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896"
Aug 13 00:56:05.692430 env[1311]: time="2025-08-13T00:56:05.692357038Z" level=error msg="ContainerStatus for \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\": not found"
Aug 13 00:56:05.692578 kubelet[2074]: E0813 00:56:05.692544    2074 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\": not found" containerID="9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896"
Aug 13 00:56:05.692676 kubelet[2074]: I0813 00:56:05.692587    2074 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896"} err="failed to get container status \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cef6becd908d96006ddfa6521821bce2cf20d5fb8ecaee41da4afdcd220a896\": not found"
Aug 13 00:56:05.692744 kubelet[2074]: I0813 00:56:05.692681    2074 scope.go:117] "RemoveContainer" containerID="7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce"
Aug 13 00:56:05.693717 env[1311]: time="2025-08-13T00:56:05.693689383Z" level=info msg="RemoveContainer for \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\""
Aug 13 00:56:05.696703 env[1311]: time="2025-08-13T00:56:05.696680633Z" level=info msg="RemoveContainer for \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\" returns successfully"
Aug 13 00:56:05.696852 kubelet[2074]: I0813 00:56:05.696805    2074 scope.go:117] "RemoveContainer" containerID="70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68"
Aug 13 00:56:05.697814 env[1311]: time="2025-08-13T00:56:05.697763247Z" level=info msg="RemoveContainer for \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\""
Aug 13 00:56:05.701805 env[1311]: time="2025-08-13T00:56:05.701565268Z" level=info msg="RemoveContainer for \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\" returns successfully"
Aug 13 00:56:05.702040 kubelet[2074]: I0813 00:56:05.701768    2074 scope.go:117] "RemoveContainer" containerID="6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b"
Aug 13
00:56:05.703110 env[1311]: time="2025-08-13T00:56:05.703068412Z" level=info msg="RemoveContainer for \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\"" Aug 13 00:56:05.706456 env[1311]: time="2025-08-13T00:56:05.706418963Z" level=info msg="RemoveContainer for \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\" returns successfully" Aug 13 00:56:05.706615 kubelet[2074]: I0813 00:56:05.706591 2074 scope.go:117] "RemoveContainer" containerID="ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da" Aug 13 00:56:05.708460 env[1311]: time="2025-08-13T00:56:05.708433501Z" level=info msg="RemoveContainer for \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\"" Aug 13 00:56:05.712117 env[1311]: time="2025-08-13T00:56:05.712028874Z" level=info msg="RemoveContainer for \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\" returns successfully" Aug 13 00:56:05.712230 kubelet[2074]: I0813 00:56:05.712208 2074 scope.go:117] "RemoveContainer" containerID="3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c" Aug 13 00:56:05.713207 env[1311]: time="2025-08-13T00:56:05.713161485Z" level=info msg="RemoveContainer for \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\"" Aug 13 00:56:05.716277 env[1311]: time="2025-08-13T00:56:05.716249771Z" level=info msg="RemoveContainer for \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\" returns successfully" Aug 13 00:56:05.716409 kubelet[2074]: I0813 00:56:05.716381 2074 scope.go:117] "RemoveContainer" containerID="7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce" Aug 13 00:56:05.716646 env[1311]: time="2025-08-13T00:56:05.716589415Z" level=error msg="ContainerStatus for \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\": not 
found" Aug 13 00:56:05.716777 kubelet[2074]: E0813 00:56:05.716748 2074 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\": not found" containerID="7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce" Aug 13 00:56:05.716860 kubelet[2074]: I0813 00:56:05.716787 2074 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce"} err="failed to get container status \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce\": not found" Aug 13 00:56:05.716860 kubelet[2074]: I0813 00:56:05.716820 2074 scope.go:117] "RemoveContainer" containerID="70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68" Aug 13 00:56:05.717056 env[1311]: time="2025-08-13T00:56:05.716999675Z" level=error msg="ContainerStatus for \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\": not found" Aug 13 00:56:05.717194 kubelet[2074]: E0813 00:56:05.717168 2074 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\": not found" containerID="70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68" Aug 13 00:56:05.717249 kubelet[2074]: I0813 00:56:05.717196 2074 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68"} 
err="failed to get container status \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\": rpc error: code = NotFound desc = an error occurred when try to find container \"70f9785fa0443b6c43087fbe809a3e012bf50896007b33cfe8c846e61c841d68\": not found" Aug 13 00:56:05.717249 kubelet[2074]: I0813 00:56:05.717213 2074 scope.go:117] "RemoveContainer" containerID="6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b" Aug 13 00:56:05.717407 env[1311]: time="2025-08-13T00:56:05.717362102Z" level=error msg="ContainerStatus for \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\": not found" Aug 13 00:56:05.717498 kubelet[2074]: E0813 00:56:05.717478 2074 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\": not found" containerID="6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b" Aug 13 00:56:05.717558 kubelet[2074]: I0813 00:56:05.717502 2074 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b"} err="failed to get container status \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d08116c2ecd3bd6b5efa1e9983f40530598e28fe4523ce0e6e9c946a718d65b\": not found" Aug 13 00:56:05.717558 kubelet[2074]: I0813 00:56:05.717519 2074 scope.go:117] "RemoveContainer" containerID="ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da" Aug 13 00:56:05.717747 env[1311]: time="2025-08-13T00:56:05.717696096Z" level=error msg="ContainerStatus for 
\"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\": not found" Aug 13 00:56:05.717869 kubelet[2074]: E0813 00:56:05.717848 2074 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\": not found" containerID="ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da" Aug 13 00:56:05.717922 kubelet[2074]: I0813 00:56:05.717871 2074 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da"} err="failed to get container status \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed97c3d96ec457cfd2da4f4890bbc0d704bac6d32e2f96de9b732560443b16da\": not found" Aug 13 00:56:05.717922 kubelet[2074]: I0813 00:56:05.717886 2074 scope.go:117] "RemoveContainer" containerID="3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c" Aug 13 00:56:05.718094 env[1311]: time="2025-08-13T00:56:05.718044416Z" level=error msg="ContainerStatus for \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\": not found" Aug 13 00:56:05.718182 kubelet[2074]: E0813 00:56:05.718159 2074 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\": not found" 
containerID="3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c" Aug 13 00:56:05.718234 kubelet[2074]: I0813 00:56:05.718185 2074 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c"} err="failed to get container status \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b9b4ed53f4605416eb196661033ea786cdf5b3da538affea8ab1d860badff5c\": not found" Aug 13 00:56:06.212395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7063efb231e932ee0763cc0d7c795db9f174ecd3e20d4a3dcd505fb3af8521ce-rootfs.mount: Deactivated successfully. Aug 13 00:56:06.212553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3-rootfs.mount: Deactivated successfully. Aug 13 00:56:06.212648 systemd[1]: var-lib-kubelet-pods-3bbf3df6\x2d7bf5\x2d434a\x2d9666\x2dadb38c73ef5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d442lw.mount: Deactivated successfully. Aug 13 00:56:06.212749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad-rootfs.mount: Deactivated successfully. Aug 13 00:56:06.212867 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad-shm.mount: Deactivated successfully. Aug 13 00:56:06.212998 systemd[1]: var-lib-kubelet-pods-d9462f70\x2d04f9\x2d4661\x2d9383\x2db6a88e53cc2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djpgnh.mount: Deactivated successfully. Aug 13 00:56:06.213104 systemd[1]: var-lib-kubelet-pods-d9462f70\x2d04f9\x2d4661\x2d9383\x2db6a88e53cc2e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 13 00:56:06.213195 systemd[1]: var-lib-kubelet-pods-d9462f70\x2d04f9\x2d4661\x2d9383\x2db6a88e53cc2e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:56:07.174320 sshd[3769]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:07.177440 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:56152.service. Aug 13 00:56:07.178123 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:56148.service: Deactivated successfully. Aug 13 00:56:07.179989 systemd-logind[1294]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:56:07.179993 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:56:07.180852 systemd-logind[1294]: Removed session 26. Aug 13 00:56:07.224525 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 56152 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:56:07.226437 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:07.230839 systemd-logind[1294]: New session 27 of user core. Aug 13 00:56:07.231729 systemd[1]: Started session-27.scope. Aug 13 00:56:07.451558 kubelet[2074]: I0813 00:56:07.451445 2074 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbf3df6-7bf5-434a-9666-adb38c73ef5b" path="/var/lib/kubelet/pods/3bbf3df6-7bf5-434a-9666-adb38c73ef5b/volumes" Aug 13 00:56:07.451980 kubelet[2074]: I0813 00:56:07.451934 2074 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9462f70-04f9-4661-9383-b6a88e53cc2e" path="/var/lib/kubelet/pods/d9462f70-04f9-4661-9383-b6a88e53cc2e/volumes" Aug 13 00:56:07.781137 sshd[3941]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:07.781227 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:56162.service. Aug 13 00:56:07.785504 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:56152.service: Deactivated successfully. Aug 13 00:56:07.788292 systemd[1]: session-27.scope: Deactivated successfully. 
Aug 13 00:56:07.788925 systemd-logind[1294]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:56:07.792138 systemd-logind[1294]: Removed session 27. Aug 13 00:56:07.793623 kubelet[2074]: E0813 00:56:07.793592 2074 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9462f70-04f9-4661-9383-b6a88e53cc2e" containerName="mount-cgroup" Aug 13 00:56:07.793623 kubelet[2074]: E0813 00:56:07.793619 2074 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bbf3df6-7bf5-434a-9666-adb38c73ef5b" containerName="cilium-operator" Aug 13 00:56:07.793623 kubelet[2074]: E0813 00:56:07.793625 2074 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9462f70-04f9-4661-9383-b6a88e53cc2e" containerName="clean-cilium-state" Aug 13 00:56:07.793769 kubelet[2074]: E0813 00:56:07.793630 2074 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9462f70-04f9-4661-9383-b6a88e53cc2e" containerName="cilium-agent" Aug 13 00:56:07.793769 kubelet[2074]: E0813 00:56:07.793637 2074 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9462f70-04f9-4661-9383-b6a88e53cc2e" containerName="apply-sysctl-overwrites" Aug 13 00:56:07.793769 kubelet[2074]: E0813 00:56:07.793642 2074 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9462f70-04f9-4661-9383-b6a88e53cc2e" containerName="mount-bpf-fs" Aug 13 00:56:07.793769 kubelet[2074]: I0813 00:56:07.793674 2074 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9462f70-04f9-4661-9383-b6a88e53cc2e" containerName="cilium-agent" Aug 13 00:56:07.793769 kubelet[2074]: I0813 00:56:07.793680 2074 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbf3df6-7bf5-434a-9666-adb38c73ef5b" containerName="cilium-operator" Aug 13 00:56:07.836080 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 56162 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:56:07.837422 sshd[3953]: pam_unix(sshd:session): session opened 
for user core(uid=500) by (uid=0) Aug 13 00:56:07.841146 systemd-logind[1294]: New session 28 of user core. Aug 13 00:56:07.841979 systemd[1]: Started session-28.scope. Aug 13 00:56:07.971160 sshd[3953]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:07.973288 kubelet[2074]: I0813 00:56:07.973256 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cni-path\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973288 kubelet[2074]: I0813 00:56:07.973288 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-hubble-tls\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973288 kubelet[2074]: I0813 00:56:07.973304 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-xtables-lock\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973490 kubelet[2074]: I0813 00:56:07.973318 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-config-path\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973490 kubelet[2074]: I0813 00:56:07.973331 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-net\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973490 kubelet[2074]: I0813 00:56:07.973344 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgkdc\" (UniqueName: \"kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-kube-api-access-cgkdc\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973490 kubelet[2074]: I0813 00:56:07.973375 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-hostproc\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973490 kubelet[2074]: I0813 00:56:07.973390 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-bpf-maps\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973490 kubelet[2074]: I0813 00:56:07.973416 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-ipsec-secrets\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973645 kubelet[2074]: I0813 00:56:07.973439 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-run\") pod \"cilium-nd2fg\" (UID: 
\"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973645 kubelet[2074]: I0813 00:56:07.973450 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-etc-cni-netd\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973645 kubelet[2074]: I0813 00:56:07.973465 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-lib-modules\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973645 kubelet[2074]: I0813 00:56:07.973491 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-kernel\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973645 kubelet[2074]: I0813 00:56:07.973503 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-cgroup\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.973645 kubelet[2074]: I0813 00:56:07.973518 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-clustermesh-secrets\") pod \"cilium-nd2fg\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " pod="kube-system/cilium-nd2fg" Aug 13 00:56:07.974105 systemd[1]: Started 
sshd@28-10.0.0.15:22-10.0.0.1:56174.service. Aug 13 00:56:07.974981 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:56162.service: Deactivated successfully. Aug 13 00:56:07.978770 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:56:07.978930 systemd-logind[1294]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:56:07.980055 systemd-logind[1294]: Removed session 28. Aug 13 00:56:07.992914 kubelet[2074]: E0813 00:56:07.985080 2074 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-cgkdc lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-nd2fg" podUID="4f5475d8-627b-4175-937c-a3c91fe36b4f" Aug 13 00:56:08.026609 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 56174 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:56:08.028087 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:08.032364 systemd-logind[1294]: New session 29 of user core. Aug 13 00:56:08.033410 systemd[1]: Started session-29.scope. 
Aug 13 00:56:08.395731 kubelet[2074]: I0813 00:56:08.395682 2074 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:56:08Z","lastTransitionTime":"2025-08-13T00:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:56:08.878252 kubelet[2074]: I0813 00:56:08.878181 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-hubble-tls\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " Aug 13 00:56:08.878252 kubelet[2074]: I0813 00:56:08.878246 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-hostproc\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " Aug 13 00:56:08.878252 kubelet[2074]: I0813 00:56:08.878269 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-run\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " Aug 13 00:56:08.878757 kubelet[2074]: I0813 00:56:08.878317 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-kernel\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " Aug 13 00:56:08.878757 kubelet[2074]: I0813 00:56:08.878366 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:08.878757 kubelet[2074]: I0813 00:56:08.878427 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:08.878757 kubelet[2074]: I0813 00:56:08.878392 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:08.878757 kubelet[2074]: I0813 00:56:08.878344 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-cgroup\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " Aug 13 00:56:08.878922 kubelet[2074]: I0813 00:56:08.878489 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-xtables-lock\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") " Aug 13 00:56:08.878922 kubelet[2074]: I0813 00:56:08.878539 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:56:08.878922 kubelet[2074]: I0813 00:56:08.878538 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:08.878922 kubelet[2074]: I0813 00:56:08.878567 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-net\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.878922 kubelet[2074]: I0813 00:56:08.878594 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgkdc\" (UniqueName: \"kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-kube-api-access-cgkdc\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.878922 kubelet[2074]: I0813 00:56:08.878645 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-bpf-maps\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.879185 kubelet[2074]: I0813 00:56:08.878633 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:08.879185 kubelet[2074]: I0813 00:56:08.878670 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cni-path\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.879185 kubelet[2074]: I0813 00:56:08.878765 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-config-path\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.879185 kubelet[2074]: I0813 00:56:08.878791 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-ipsec-secrets\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.879185 kubelet[2074]: I0813 00:56:08.878817 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-clustermesh-secrets\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.879185 kubelet[2074]: I0813 00:56:08.878859 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-etc-cni-netd\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.879334 kubelet[2074]: I0813 00:56:08.878881 2074 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-lib-modules\") pod \"4f5475d8-627b-4175-937c-a3c91fe36b4f\" (UID: \"4f5475d8-627b-4175-937c-a3c91fe36b4f\") "
Aug 13 00:56:08.879334 kubelet[2074]: I0813 00:56:08.878924 2074 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.879334 kubelet[2074]: I0813 00:56:08.878940 2074 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.879334 kubelet[2074]: I0813 00:56:08.878953 2074 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.879334 kubelet[2074]: I0813 00:56:08.878965 2074 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.879334 kubelet[2074]: I0813 00:56:08.878976 2074 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.879334 kubelet[2074]: I0813 00:56:08.878988 2074 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.879488 kubelet[2074]: I0813 00:56:08.878706 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:08.879488 kubelet[2074]: I0813 00:56:08.878725 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:08.879488 kubelet[2074]: I0813 00:56:08.879013 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:08.879488 kubelet[2074]: I0813 00:56:08.879307 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:56:08.881900 kubelet[2074]: I0813 00:56:08.881861 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:56:08.882501 kubelet[2074]: I0813 00:56:08.882466 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:56:08.883425 kubelet[2074]: I0813 00:56:08.883368 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:56:08.883968 systemd[1]: var-lib-kubelet-pods-4f5475d8\x2d627b\x2d4175\x2d937c\x2da3c91fe36b4f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 00:56:08.887777 kubelet[2074]: I0813 00:56:08.885384 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-kube-api-access-cgkdc" (OuterVolumeSpecName: "kube-api-access-cgkdc") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "kube-api-access-cgkdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:56:08.884121 systemd[1]: var-lib-kubelet-pods-4f5475d8\x2d627b\x2d4175\x2d937c\x2da3c91fe36b4f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:56:08.886564 systemd[1]: var-lib-kubelet-pods-4f5475d8\x2d627b\x2d4175\x2d937c\x2da3c91fe36b4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgkdc.mount: Deactivated successfully.
Aug 13 00:56:08.886697 systemd[1]: var-lib-kubelet-pods-4f5475d8\x2d627b\x2d4175\x2d937c\x2da3c91fe36b4f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:56:08.888136 kubelet[2074]: I0813 00:56:08.888092 2074 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f5475d8-627b-4175-937c-a3c91fe36b4f" (UID: "4f5475d8-627b-4175-937c-a3c91fe36b4f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:56:08.980189 kubelet[2074]: I0813 00:56:08.980125 2074 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980189 kubelet[2074]: I0813 00:56:08.980168 2074 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980189 kubelet[2074]: I0813 00:56:08.980179 2074 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980189 kubelet[2074]: I0813 00:56:08.980190 2074 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980189 kubelet[2074]: I0813 00:56:08.980201 2074 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgkdc\" (UniqueName: \"kubernetes.io/projected/4f5475d8-627b-4175-937c-a3c91fe36b4f-kube-api-access-cgkdc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980496 kubelet[2074]: I0813 00:56:08.980213 2074 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980496 kubelet[2074]: I0813 00:56:08.980223 2074 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980496 kubelet[2074]: I0813 00:56:08.980232 2074 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f5475d8-627b-4175-937c-a3c91fe36b4f-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:08.980496 kubelet[2074]: I0813 00:56:08.980245 2074 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f5475d8-627b-4175-937c-a3c91fe36b4f-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:09.883323 kubelet[2074]: I0813 00:56:09.883250 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-cilium-run\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883323 kubelet[2074]: I0813 00:56:09.883302 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-etc-cni-netd\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883323 kubelet[2074]: I0813 00:56:09.883324 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fca97888-cdc3-4b53-89b1-2e5adee0a850-clustermesh-secrets\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883323 kubelet[2074]: I0813 00:56:09.883338 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-hostproc\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883883 kubelet[2074]: I0813 00:56:09.883395 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-host-proc-sys-net\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883883 kubelet[2074]: I0813 00:56:09.883479 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-lib-modules\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883883 kubelet[2074]: I0813 00:56:09.883506 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-xtables-lock\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883883 kubelet[2074]: I0813 00:56:09.883521 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fca97888-cdc3-4b53-89b1-2e5adee0a850-hubble-tls\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883883 kubelet[2074]: I0813 00:56:09.883539 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-bpf-maps\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.883883 kubelet[2074]: I0813 00:56:09.883557 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-cni-path\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.884023 kubelet[2074]: I0813 00:56:09.883569 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fca97888-cdc3-4b53-89b1-2e5adee0a850-cilium-ipsec-secrets\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.884023 kubelet[2074]: I0813 00:56:09.883583 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-host-proc-sys-kernel\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.884023 kubelet[2074]: I0813 00:56:09.883598 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wznx\" (UniqueName: \"kubernetes.io/projected/fca97888-cdc3-4b53-89b1-2e5adee0a850-kube-api-access-2wznx\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.884023 kubelet[2074]: I0813 00:56:09.883613 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fca97888-cdc3-4b53-89b1-2e5adee0a850-cilium-cgroup\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:09.884023 kubelet[2074]: I0813 00:56:09.883688 2074 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fca97888-cdc3-4b53-89b1-2e5adee0a850-cilium-config-path\") pod \"cilium-q8955\" (UID: \"fca97888-cdc3-4b53-89b1-2e5adee0a850\") " pod="kube-system/cilium-q8955"
Aug 13 00:56:10.037325 kubelet[2074]: E0813 00:56:10.037254 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:10.037886 env[1311]: time="2025-08-13T00:56:10.037816430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8955,Uid:fca97888-cdc3-4b53-89b1-2e5adee0a850,Namespace:kube-system,Attempt:0,}"
Aug 13 00:56:10.056484 env[1311]: time="2025-08-13T00:56:10.056411699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:56:10.056484 env[1311]: time="2025-08-13T00:56:10.056449512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:56:10.056484 env[1311]: time="2025-08-13T00:56:10.056461365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:56:10.056704 env[1311]: time="2025-08-13T00:56:10.056644288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0 pid=4001 runtime=io.containerd.runc.v2
Aug 13 00:56:10.093275 env[1311]: time="2025-08-13T00:56:10.093207276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8955,Uid:fca97888-cdc3-4b53-89b1-2e5adee0a850,Namespace:kube-system,Attempt:0,} returns sandbox id \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\""
Aug 13 00:56:10.094061 kubelet[2074]: E0813 00:56:10.094033 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:10.097343 env[1311]: time="2025-08-13T00:56:10.096878214Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:56:10.125358 env[1311]: time="2025-08-13T00:56:10.125263542Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1684c3b8f6f15d830cb41e5b0bdaa678ca61e63dc8e1f5e550a2e3e48fb09904\""
Aug 13 00:56:10.126086 env[1311]: time="2025-08-13T00:56:10.126046151Z" level=info msg="StartContainer for \"1684c3b8f6f15d830cb41e5b0bdaa678ca61e63dc8e1f5e550a2e3e48fb09904\""
Aug 13 00:56:10.182859 env[1311]: time="2025-08-13T00:56:10.182715392Z" level=info msg="StartContainer for \"1684c3b8f6f15d830cb41e5b0bdaa678ca61e63dc8e1f5e550a2e3e48fb09904\" returns successfully"
Aug 13 00:56:10.214776 env[1311]: time="2025-08-13T00:56:10.214716803Z" level=info msg="shim disconnected" id=1684c3b8f6f15d830cb41e5b0bdaa678ca61e63dc8e1f5e550a2e3e48fb09904
Aug 13 00:56:10.214776 env[1311]: time="2025-08-13T00:56:10.214773091Z" level=warning msg="cleaning up after shim disconnected" id=1684c3b8f6f15d830cb41e5b0bdaa678ca61e63dc8e1f5e550a2e3e48fb09904 namespace=k8s.io
Aug 13 00:56:10.214776 env[1311]: time="2025-08-13T00:56:10.214784574Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:10.223244 env[1311]: time="2025-08-13T00:56:10.222713554Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4087 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:10.510525 kubelet[2074]: E0813 00:56:10.510398 2074 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:56:10.703108 kubelet[2074]: E0813 00:56:10.703081 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:10.704394 env[1311]: time="2025-08-13T00:56:10.704345620Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:56:10.716588 env[1311]: time="2025-08-13T00:56:10.716536488Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd9491bb6a348c38ded27c83ca5d646c7eae16e883b243ec8f65fbff9d3eb1d0\""
Aug 13 00:56:10.717180 env[1311]: time="2025-08-13T00:56:10.717151945Z" level=info msg="StartContainer for \"dd9491bb6a348c38ded27c83ca5d646c7eae16e883b243ec8f65fbff9d3eb1d0\""
Aug 13 00:56:10.761301 env[1311]: time="2025-08-13T00:56:10.761186542Z" level=info msg="StartContainer for \"dd9491bb6a348c38ded27c83ca5d646c7eae16e883b243ec8f65fbff9d3eb1d0\" returns successfully"
Aug 13 00:56:10.786155 env[1311]: time="2025-08-13T00:56:10.786100976Z" level=info msg="shim disconnected" id=dd9491bb6a348c38ded27c83ca5d646c7eae16e883b243ec8f65fbff9d3eb1d0
Aug 13 00:56:10.786155 env[1311]: time="2025-08-13T00:56:10.786153187Z" level=warning msg="cleaning up after shim disconnected" id=dd9491bb6a348c38ded27c83ca5d646c7eae16e883b243ec8f65fbff9d3eb1d0 namespace=k8s.io
Aug 13 00:56:10.786429 env[1311]: time="2025-08-13T00:56:10.786167745Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:10.793166 env[1311]: time="2025-08-13T00:56:10.793107487Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4148 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:11.452296 kubelet[2074]: I0813 00:56:11.452244 2074 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5475d8-627b-4175-937c-a3c91fe36b4f" path="/var/lib/kubelet/pods/4f5475d8-627b-4175-937c-a3c91fe36b4f/volumes"
Aug 13 00:56:11.706616 kubelet[2074]: E0813 00:56:11.706371 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:11.708150 env[1311]: time="2025-08-13T00:56:11.708053742Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:56:11.722306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467668548.mount: Deactivated successfully.
Aug 13 00:56:11.724055 env[1311]: time="2025-08-13T00:56:11.724010322Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa80c1c715b77ef86f038aa671bd1e421ad9c67320d658900351bb8bdaaf35f1\""
Aug 13 00:56:11.728338 env[1311]: time="2025-08-13T00:56:11.728283755Z" level=info msg="StartContainer for \"aa80c1c715b77ef86f038aa671bd1e421ad9c67320d658900351bb8bdaaf35f1\""
Aug 13 00:56:11.896091 env[1311]: time="2025-08-13T00:56:11.896028103Z" level=info msg="StartContainer for \"aa80c1c715b77ef86f038aa671bd1e421ad9c67320d658900351bb8bdaaf35f1\" returns successfully"
Aug 13 00:56:11.972785 env[1311]: time="2025-08-13T00:56:11.972652427Z" level=info msg="shim disconnected" id=aa80c1c715b77ef86f038aa671bd1e421ad9c67320d658900351bb8bdaaf35f1
Aug 13 00:56:11.972785 env[1311]: time="2025-08-13T00:56:11.972716441Z" level=warning msg="cleaning up after shim disconnected" id=aa80c1c715b77ef86f038aa671bd1e421ad9c67320d658900351bb8bdaaf35f1 namespace=k8s.io
Aug 13 00:56:11.972785 env[1311]: time="2025-08-13T00:56:11.972727642Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:11.979768 env[1311]: time="2025-08-13T00:56:11.979723978Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4205 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:11.989224 systemd[1]: run-containerd-runc-k8s.io-aa80c1c715b77ef86f038aa671bd1e421ad9c67320d658900351bb8bdaaf35f1-runc.LwjbCK.mount: Deactivated successfully.
Aug 13 00:56:11.989374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa80c1c715b77ef86f038aa671bd1e421ad9c67320d658900351bb8bdaaf35f1-rootfs.mount: Deactivated successfully.
Aug 13 00:56:12.709886 kubelet[2074]: E0813 00:56:12.709848 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:12.711543 env[1311]: time="2025-08-13T00:56:12.711493569Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:56:12.726201 env[1311]: time="2025-08-13T00:56:12.726084742Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96059b003d32ef5878e035ec4fb63876c926fbe52a2843c7374214e2d971c464\""
Aug 13 00:56:12.728153 env[1311]: time="2025-08-13T00:56:12.727538737Z" level=info msg="StartContainer for \"96059b003d32ef5878e035ec4fb63876c926fbe52a2843c7374214e2d971c464\""
Aug 13 00:56:12.767961 env[1311]: time="2025-08-13T00:56:12.767914471Z" level=info msg="StartContainer for \"96059b003d32ef5878e035ec4fb63876c926fbe52a2843c7374214e2d971c464\" returns successfully"
Aug 13 00:56:12.784543 env[1311]: time="2025-08-13T00:56:12.784499221Z" level=info msg="shim disconnected" id=96059b003d32ef5878e035ec4fb63876c926fbe52a2843c7374214e2d971c464
Aug 13 00:56:12.784748 env[1311]: time="2025-08-13T00:56:12.784543968Z" level=warning msg="cleaning up after shim disconnected" id=96059b003d32ef5878e035ec4fb63876c926fbe52a2843c7374214e2d971c464 namespace=k8s.io
Aug 13 00:56:12.784748 env[1311]: time="2025-08-13T00:56:12.784552924Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:12.791429 env[1311]: time="2025-08-13T00:56:12.791407991Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4260 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:12.989041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96059b003d32ef5878e035ec4fb63876c926fbe52a2843c7374214e2d971c464-rootfs.mount: Deactivated successfully.
Aug 13 00:56:13.713554 kubelet[2074]: E0813 00:56:13.713432 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:13.715052 env[1311]: time="2025-08-13T00:56:13.714967456Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:56:13.733041 env[1311]: time="2025-08-13T00:56:13.732973367Z" level=info msg="CreateContainer within sandbox \"432fba67d1435343c8a4f00e60841f9c7a4a6c42891971514024995b3af721a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7f4230fe861b4398e305dc8994fe5951453c6f2e0e77f0976c3c82068dc630f4\""
Aug 13 00:56:13.733593 env[1311]: time="2025-08-13T00:56:13.733528499Z" level=info msg="StartContainer for \"7f4230fe861b4398e305dc8994fe5951453c6f2e0e77f0976c3c82068dc630f4\""
Aug 13 00:56:13.773648 env[1311]: time="2025-08-13T00:56:13.773574998Z" level=info msg="StartContainer for \"7f4230fe861b4398e305dc8994fe5951453c6f2e0e77f0976c3c82068dc630f4\" returns successfully"
Aug 13 00:56:14.075856 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:56:14.718079 kubelet[2074]: E0813 00:56:14.718045 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:14.733120 kubelet[2074]: I0813 00:56:14.733058 2074 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q8955" podStartSLOduration=5.733039665 podStartE2EDuration="5.733039665s" podCreationTimestamp="2025-08-13 00:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:56:14.732950362 +0000 UTC m=+119.360417430" watchObservedRunningTime="2025-08-13 00:56:14.733039665 +0000 UTC m=+119.360506702"
Aug 13 00:56:15.445079 env[1311]: time="2025-08-13T00:56:15.445032937Z" level=info msg="StopPodSandbox for \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\""
Aug 13 00:56:15.445527 env[1311]: time="2025-08-13T00:56:15.445104445Z" level=info msg="TearDown network for sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" successfully"
Aug 13 00:56:15.445527 env[1311]: time="2025-08-13T00:56:15.445133421Z" level=info msg="StopPodSandbox for \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" returns successfully"
Aug 13 00:56:15.445527 env[1311]: time="2025-08-13T00:56:15.445455172Z" level=info msg="RemovePodSandbox for \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\""
Aug 13 00:56:15.445527 env[1311]: time="2025-08-13T00:56:15.445481293Z" level=info msg="Forcibly stopping sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\""
Aug 13 00:56:15.445688 env[1311]: time="2025-08-13T00:56:15.445550998Z" level=info msg="TearDown network for sandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" successfully"
Aug 13 00:56:15.547999 env[1311]: time="2025-08-13T00:56:15.547927354Z" level=info msg="RemovePodSandbox \"e7c3c711f2abea59768774598585fafdca96811ba265f4ab65ce13e3eb5be6ad\" returns successfully"
Aug 13 00:56:15.548323 env[1311]: time="2025-08-13T00:56:15.548292429Z" level=info msg="StopPodSandbox for \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\""
Aug 13 00:56:15.548405 env[1311]: time="2025-08-13T00:56:15.548371281Z" level=info msg="TearDown network for sandbox \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" successfully"
Aug 13 00:56:15.548455 env[1311]: time="2025-08-13T00:56:15.548402552Z" level=info msg="StopPodSandbox for \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" returns successfully"
Aug 13 00:56:15.548679 env[1311]: time="2025-08-13T00:56:15.548642274Z" level=info msg="RemovePodSandbox for \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\""
Aug 13 00:56:15.548679 env[1311]: time="2025-08-13T00:56:15.548670870Z" level=info msg="Forcibly stopping sandbox \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\""
Aug 13 00:56:15.548911 env[1311]: time="2025-08-13T00:56:15.548734954Z" level=info msg="TearDown network for sandbox \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" successfully"
Aug 13 00:56:15.692220 env[1311]: time="2025-08-13T00:56:15.692162319Z" level=info msg="RemovePodSandbox \"040656d93d79b9d733e5e088efe4f6ff12cb892e97fd9405d45a41ac3c129fe3\" returns successfully"
Aug 13 00:56:16.039001 kubelet[2074]: E0813 00:56:16.038957 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:16.276503 systemd[1]: run-containerd-runc-k8s.io-7f4230fe861b4398e305dc8994fe5951453c6f2e0e77f0976c3c82068dc630f4-runc.BW39ja.mount: Deactivated successfully.
Aug 13 00:56:16.978358 systemd-networkd[1078]: lxc_health: Link UP
Aug 13 00:56:16.989860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:56:16.993433 systemd-networkd[1078]: lxc_health: Gained carrier
Aug 13 00:56:18.039512 kubelet[2074]: E0813 00:56:18.039467 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:18.415188 systemd[1]: run-containerd-runc-k8s.io-7f4230fe861b4398e305dc8994fe5951453c6f2e0e77f0976c3c82068dc630f4-runc.Po2Lfo.mount: Deactivated successfully.
Aug 13 00:56:18.728078 kubelet[2074]: E0813 00:56:18.727944 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:18.896007 systemd-networkd[1078]: lxc_health: Gained IPv6LL
Aug 13 00:56:19.729589 kubelet[2074]: E0813 00:56:19.729558 2074 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:22.648314 sshd[3968]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:22.650368 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:56174.service: Deactivated successfully.
Aug 13 00:56:22.651419 systemd-logind[1294]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:56:22.651498 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:56:22.652473 systemd-logind[1294]: Removed session 29.