Aug 13 00:53:43.155287 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 00:53:43.155317 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:53:43.155326 kernel: BIOS-provided physical RAM map:
Aug 13 00:53:43.155332 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 00:53:43.155337 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 00:53:43.155343 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:53:43.155350 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Aug 13 00:53:43.155356 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Aug 13 00:53:43.155363 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:53:43.155368 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:53:43.155374 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:53:43.155379 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:53:43.155385 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:53:43.155391 kernel: NX (Execute Disable) protection: active
Aug 13 00:53:43.155399 kernel: SMBIOS 2.8 present.
Aug 13 00:53:43.155406 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 13 00:53:43.155412 kernel: Hypervisor detected: KVM
Aug 13 00:53:43.155417 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:53:43.155426 kernel: kvm-clock: cpu 0, msr 8119e001, primary cpu clock
Aug 13 00:53:43.155432 kernel: kvm-clock: using sched offset of 3595204648 cycles
Aug 13 00:53:43.155439 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:53:43.155445 kernel: tsc: Detected 2794.750 MHz processor
Aug 13 00:53:43.155452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:53:43.155459 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:53:43.155466 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Aug 13 00:53:43.155472 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:53:43.155478 kernel: Using GB pages for direct mapping
Aug 13 00:53:43.155484 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:53:43.155490 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Aug 13 00:53:43.155496 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:43.155503 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:43.155509 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:43.155517 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 13 00:53:43.155523 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:43.155529 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:43.155535 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:43.155541 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:53:43.155548 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Aug 13 00:53:43.155554 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Aug 13 00:53:43.155560 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 13 00:53:43.155570 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Aug 13 00:53:43.155576 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Aug 13 00:53:43.155583 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Aug 13 00:53:43.155590 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Aug 13 00:53:43.155599 kernel: No NUMA configuration found
Aug 13 00:53:43.155607 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Aug 13 00:53:43.155616 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Aug 13 00:53:43.155624 kernel: Zone ranges:
Aug 13 00:53:43.155632 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:53:43.155639 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Aug 13 00:53:43.155645 kernel: Normal empty
Aug 13 00:53:43.155652 kernel: Movable zone start for each node
Aug 13 00:53:43.155659 kernel: Early memory node ranges
Aug 13 00:53:43.155665 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:53:43.155672 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Aug 13 00:53:43.155679 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Aug 13 00:53:43.155688 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:53:43.155695 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:53:43.155702 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Aug 13 00:53:43.155709 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:53:43.155716 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:53:43.155723 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:53:43.155729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:53:43.155736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:53:43.155743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:53:43.155764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:53:43.155771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:53:43.155778 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:53:43.155787 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:53:43.155793 kernel: TSC deadline timer available
Aug 13 00:53:43.155800 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 13 00:53:43.155807 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:53:43.155813 kernel: kvm-guest: setup PV sched yield
Aug 13 00:53:43.155820 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:53:43.155828 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:53:43.155835 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:53:43.155842 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Aug 13 00:53:43.155848 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Aug 13 00:53:43.155855 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Aug 13 00:53:43.155862 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 13 00:53:43.155868 kernel: kvm-guest: setup async PF for cpu 0
Aug 13 00:53:43.155875 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Aug 13 00:53:43.155881 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:53:43.155890 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:53:43.155896 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Aug 13 00:53:43.155903 kernel: Policy zone: DMA32
Aug 13 00:53:43.155910 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:53:43.155918 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:53:43.155924 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:53:43.155931 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:53:43.155938 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:53:43.155946 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 134796K reserved, 0K cma-reserved)
Aug 13 00:53:43.155953 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:53:43.155960 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 00:53:43.155966 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 00:53:43.155973 kernel: rcu: Hierarchical RCU implementation.
Aug 13 00:53:43.155980 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:53:43.155987 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:53:43.155993 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:53:43.156000 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:53:43.156011 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:53:43.156018 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:53:43.156024 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 13 00:53:43.156031 kernel: random: crng init done
Aug 13 00:53:43.156037 kernel: Console: colour VGA+ 80x25
Aug 13 00:53:43.156044 kernel: printk: console [ttyS0] enabled
Aug 13 00:53:43.156050 kernel: ACPI: Core revision 20210730
Aug 13 00:53:43.156057 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:53:43.156064 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:53:43.156072 kernel: x2apic enabled
Aug 13 00:53:43.156079 kernel: Switched APIC routing to physical x2apic.
Aug 13 00:53:43.156087 kernel: kvm-guest: setup PV IPIs
Aug 13 00:53:43.156094 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:53:43.156101 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 00:53:43.156119 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 13 00:53:43.156128 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:53:43.156137 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:53:43.156146 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:53:43.156165 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:53:43.156174 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:53:43.156184 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:53:43.156193 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 13 00:53:43.156202 kernel: RETBleed: Mitigation: untrained return thunk
Aug 13 00:53:43.156211 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:53:43.156219 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Aug 13 00:53:43.156226 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:53:43.156233 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:53:43.156241 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:53:43.156248 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:53:43.156255 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 00:53:43.156262 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:53:43.156269 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:53:43.156276 kernel: LSM: Security Framework initializing
Aug 13 00:53:43.156283 kernel: SELinux: Initializing.
Aug 13 00:53:43.156291 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:53:43.156298 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:53:43.156305 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 13 00:53:43.156313 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:53:43.156320 kernel: ... version:                0
Aug 13 00:53:43.156326 kernel: ... bit width:              48
Aug 13 00:53:43.156333 kernel: ... generic registers:      6
Aug 13 00:53:43.156340 kernel: ... value mask:             0000ffffffffffff
Aug 13 00:53:43.156347 kernel: ... max period:             00007fffffffffff
Aug 13 00:53:43.156356 kernel: ... fixed-purpose events:   0
Aug 13 00:53:43.156363 kernel: ... event mask:             000000000000003f
Aug 13 00:53:43.156370 kernel: signal: max sigframe size: 1776
Aug 13 00:53:43.156377 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:53:43.156384 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:53:43.156391 kernel: x86: Booting SMP configuration:
Aug 13 00:53:43.156398 kernel: .... node #0, CPUs: #1
Aug 13 00:53:43.156405 kernel: kvm-clock: cpu 1, msr 8119e041, secondary cpu clock
Aug 13 00:53:43.156412 kernel: kvm-guest: setup async PF for cpu 1
Aug 13 00:53:43.156420 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Aug 13 00:53:43.156427 kernel: #2
Aug 13 00:53:43.156435 kernel: kvm-clock: cpu 2, msr 8119e081, secondary cpu clock
Aug 13 00:53:43.156441 kernel: kvm-guest: setup async PF for cpu 2
Aug 13 00:53:43.156448 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Aug 13 00:53:43.156455 kernel: #3
Aug 13 00:53:43.156465 kernel: kvm-clock: cpu 3, msr 8119e0c1, secondary cpu clock
Aug 13 00:53:43.156472 kernel: kvm-guest: setup async PF for cpu 3
Aug 13 00:53:43.156479 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Aug 13 00:53:43.156486 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:53:43.156494 kernel: smpboot: Max logical packages: 1
Aug 13 00:53:43.156501 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 13 00:53:43.156508 kernel: devtmpfs: initialized
Aug 13 00:53:43.156515 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:53:43.156522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:53:43.156539 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:53:43.156547 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:53:43.156554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:53:43.156561 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:53:43.156571 kernel: audit: type=2000 audit(1755046421.900:1): state=initialized audit_enabled=0 res=1
Aug 13 00:53:43.156578 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:53:43.156584 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:53:43.156591 kernel: cpuidle: using governor menu
Aug 13 00:53:43.156598 kernel: ACPI: bus type PCI registered
Aug 13 00:53:43.156605 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:53:43.156612 kernel: dca service started, version 1.12.1
Aug 13 00:53:43.156622 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 00:53:43.156629 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Aug 13 00:53:43.156640 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:53:43.156647 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:53:43.156654 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:53:43.156663 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:53:43.156672 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:53:43.156680 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:53:43.156689 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:53:43.156698 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:53:43.156706 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:53:43.156717 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:53:43.156726 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:53:43.156734 kernel: ACPI: Interpreter enabled
Aug 13 00:53:43.156743 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:53:43.156784 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:53:43.156887 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:53:43.156894 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:53:43.156901 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:53:43.157095 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:53:43.157215 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:53:43.157298 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:53:43.157307 kernel: PCI host bridge to bus 0000:00
Aug 13 00:53:43.157464 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:53:43.157536 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:53:43.157645 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:53:43.157738 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Aug 13 00:53:43.157832 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:53:43.157934 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Aug 13 00:53:43.158027 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:53:43.158336 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 00:53:43.158434 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 00:53:43.158534 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:53:43.158717 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:53:43.158912 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:53:43.158988 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:53:43.159153 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:53:43.159252 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Aug 13 00:53:43.159341 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:53:43.159420 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:53:43.159510 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Aug 13 00:53:43.159587 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 00:53:43.159664 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:53:43.159738 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:53:43.159869 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:53:43.159947 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Aug 13 00:53:43.160024 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:53:43.160097 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 13 00:53:43.160202 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:53:43.160299 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 00:53:43.160381 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:53:43.160473 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 00:53:43.160548 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Aug 13 00:53:43.160667 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Aug 13 00:53:43.160832 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 00:53:43.160997 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 00:53:43.161009 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:53:43.161016 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:53:43.161024 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:53:43.161031 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:53:43.161042 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:53:43.161055 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:53:43.161062 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:53:43.161070 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:53:43.161077 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:53:43.161084 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:53:43.161091 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:53:43.161098 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:53:43.161105 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:53:43.161123 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:53:43.161137 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:53:43.161146 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:53:43.161155 kernel: iommu: Default domain type: Translated
Aug 13 00:53:43.161164 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:53:43.161258 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:53:43.161334 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:53:43.161406 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:53:43.161416 kernel: vgaarb: loaded
Aug 13 00:53:43.161425 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:53:43.161432 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:53:43.161439 kernel: PTP clock support registered
Aug 13 00:53:43.161447 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:53:43.161454 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:53:43.161460 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 00:53:43.161468 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Aug 13 00:53:43.161474 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:53:43.161482 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:53:43.161490 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:53:43.161497 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:53:43.161509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:53:43.161516 kernel: pnp: PnP ACPI init
Aug 13 00:53:43.161614 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:53:43.161626 kernel: pnp: PnP ACPI: found 6 devices
Aug 13 00:53:43.161633 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:53:43.161640 kernel: NET: Registered PF_INET protocol family
Aug 13 00:53:43.161649 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:53:43.161657 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:53:43.161664 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:53:43.161672 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:53:43.161681 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:53:43.161690 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:53:43.161699 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:53:43.161708 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:53:43.161716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:53:43.161727 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:53:43.161841 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:53:43.161917 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:53:43.161982 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:53:43.162048 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Aug 13 00:53:43.162125 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:53:43.162207 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Aug 13 00:53:43.162218 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:53:43.162228 kernel: Initialise system trusted keyrings
Aug 13 00:53:43.162235 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:53:43.162242 kernel: Key type asymmetric registered
Aug 13 00:53:43.162249 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:53:43.162256 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:53:43.162263 kernel: io scheduler mq-deadline registered
Aug 13 00:53:43.162270 kernel: io scheduler kyber registered
Aug 13 00:53:43.162277 kernel: io scheduler bfq registered
Aug 13 00:53:43.162285 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:53:43.162294 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:53:43.162301 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:53:43.162308 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 00:53:43.162316 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:53:43.162323 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:53:43.162330 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:53:43.162337 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:53:43.162344 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:53:43.162449 kernel: rtc_cmos 00:04: RTC can wake from S4
Aug 13 00:53:43.162462 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:53:43.162530 kernel: rtc_cmos 00:04: registered as rtc0
Aug 13 00:53:43.162605 kernel: rtc_cmos 00:04: setting system clock to 2025-08-13T00:53:42 UTC (1755046422)
Aug 13 00:53:43.162720 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:53:43.162733 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:53:43.162740 kernel: Segment Routing with IPv6
Aug 13 00:53:43.162747 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:53:43.162799 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:53:43.162810 kernel: Key type dns_resolver registered
Aug 13 00:53:43.162821 kernel: IPI shorthand broadcast: enabled
Aug 13 00:53:43.162828 kernel: sched_clock: Marking stable (472003030, 101591609)->(633205530, -59610891)
Aug 13 00:53:43.162835 kernel: registered taskstats version 1
Aug 13 00:53:43.162842 kernel: Loading compiled-in X.509 certificates
Aug 13 00:53:43.162849 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 00:53:43.162856 kernel: Key type .fscrypt registered
Aug 13 00:53:43.162863 kernel: Key type fscrypt-provisioning registered
Aug 13 00:53:43.162870 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:53:43.162879 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:53:43.162886 kernel: ima: No architecture policies found
Aug 13 00:53:43.162893 kernel: clk: Disabling unused clocks
Aug 13 00:53:43.162900 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 00:53:43.162907 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 00:53:43.162914 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 00:53:43.162921 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 00:53:43.162929 kernel: Run /init as init process
Aug 13 00:53:43.162936 kernel:   with arguments:
Aug 13 00:53:43.162944 kernel:     /init
Aug 13 00:53:43.162951 kernel:   with environment:
Aug 13 00:53:43.162957 kernel:     HOME=/
Aug 13 00:53:43.162964 kernel:     TERM=linux
Aug 13 00:53:43.162971 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:53:43.162983 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:53:43.162993 systemd[1]: Detected virtualization kvm.
Aug 13 00:53:43.163001 systemd[1]: Detected architecture x86-64.
Aug 13 00:53:43.163009 systemd[1]: Running in initrd.
Aug 13 00:53:43.163017 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:53:43.163024 systemd[1]: Hostname set to .
Aug 13 00:53:43.163032 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:53:43.163040 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:53:43.163047 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:53:43.163054 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:53:43.163062 systemd[1]: Reached target paths.target.
Aug 13 00:53:43.163070 systemd[1]: Reached target slices.target.
Aug 13 00:53:43.163078 systemd[1]: Reached target swap.target.
Aug 13 00:53:43.163092 systemd[1]: Reached target timers.target.
Aug 13 00:53:43.163101 systemd[1]: Listening on iscsid.socket.
Aug 13 00:53:43.163121 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:53:43.163133 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:53:43.163143 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:53:43.163153 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:53:43.163163 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:53:43.163174 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:53:43.163184 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:53:43.163194 systemd[1]: Reached target sockets.target.
Aug 13 00:53:43.163204 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:53:43.163214 systemd[1]: Finished network-cleanup.service.
Aug 13 00:53:43.163225 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:53:43.163233 systemd[1]: Starting systemd-journald.service...
Aug 13 00:53:43.163241 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:53:43.163248 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:53:43.163256 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:53:43.163264 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:53:43.163272 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:53:43.163279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:53:43.163287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:53:43.163302 systemd-journald[198]: Journal started
Aug 13 00:53:43.163348 systemd-journald[198]: Runtime Journal (/run/log/journal/ee5c078ef8484ed9b10bd918ad9df3de) is 6.0M, max 48.5M, 42.5M free.
Aug 13 00:53:43.154443 systemd-modules-load[199]: Inserted module 'overlay'
Aug 13 00:53:43.190465 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:53:43.190494 kernel: audit: type=1130 audit(1755046423.189:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.177220 systemd-resolved[200]: Positive Trust Anchors:
Aug 13 00:53:43.195007 systemd[1]: Started systemd-journald.service.
Aug 13 00:53:43.195023 kernel: Bridge firewalling registered
Aug 13 00:53:43.177240 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:53:43.177269 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:53:43.180201 systemd-resolved[200]: Defaulting to hostname 'linux'.
Aug 13 00:53:43.193432 systemd-modules-load[199]: Inserted module 'br_netfilter'
Aug 13 00:53:43.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.204339 systemd[1]: Started systemd-resolved.service.
Aug 13 00:53:43.209594 kernel: audit: type=1130 audit(1755046423.203:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.209624 kernel: audit: type=1130 audit(1755046423.208:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.209805 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:53:43.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.214191 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:53:43.219040 kernel: audit: type=1130 audit(1755046423.213:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.217972 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:53:43.223770 kernel: SCSI subsystem initialized
Aug 13 00:53:43.230504 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:53:43.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.236054 kernel: audit: type=1130 audit(1755046423.230:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.232294 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:53:43.241299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:53:43.241320 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:53:43.241329 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 00:53:43.241554 dracut-cmdline[217]: dracut-dracut-053
Aug 13 00:53:43.243639 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 00:53:43.248612 systemd-modules-load[199]: Inserted module 'dm_multipath'
Aug 13 00:53:43.249951 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:53:43.254883 kernel: audit: type=1130 audit(1755046423.250:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.254084 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:53:43.269840 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:53:43.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.275798 kernel: audit: type=1130 audit(1755046423.270:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.322792 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:53:43.338787 kernel: iscsi: registered transport (tcp)
Aug 13 00:53:43.361044 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:53:43.361091 kernel: QLogic iSCSI HBA Driver
Aug 13 00:53:43.386878 systemd[1]: Finished dracut-cmdline.service.
Aug 13 00:53:43.393216 kernel: audit: type=1130 audit(1755046423.387:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.388801 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 00:53:43.437784 kernel: raid6: avx2x4 gen() 28549 MB/s
Aug 13 00:53:43.454789 kernel: raid6: avx2x4 xor() 7084 MB/s
Aug 13 00:53:43.471774 kernel: raid6: avx2x2 gen() 31823 MB/s
Aug 13 00:53:43.488777 kernel: raid6: avx2x2 xor() 18760 MB/s
Aug 13 00:53:43.505772 kernel: raid6: avx2x1 gen() 25438 MB/s
Aug 13 00:53:43.522782 kernel: raid6: avx2x1 xor() 14772 MB/s
Aug 13 00:53:43.539782 kernel: raid6: sse2x4 gen() 14050 MB/s
Aug 13 00:53:43.556784 kernel: raid6: sse2x4 xor() 7271 MB/s
Aug 13 00:53:43.573790 kernel: raid6: sse2x2 gen() 15237 MB/s
Aug 13 00:53:43.590789 kernel: raid6: sse2x2 xor() 9364 MB/s
Aug 13 00:53:43.607818 kernel: raid6: sse2x1 gen() 11771 MB/s
Aug 13 00:53:43.625127 kernel: raid6: sse2x1 xor() 7427 MB/s
Aug 13 00:53:43.625173 kernel: raid6: using algorithm avx2x2 gen() 31823 MB/s
Aug 13 00:53:43.625189 kernel: raid6: .... xor() 18760 MB/s, rmw enabled
Aug 13 00:53:43.625878 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:53:43.638789 kernel: xor: automatically using best checksumming function avx
Aug 13 00:53:43.731814 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Aug 13 00:53:43.740195 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 00:53:43.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.743000 audit: BPF prog-id=7 op=LOAD
Aug 13 00:53:43.744775 kernel: audit: type=1130 audit(1755046423.740:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.743000 audit: BPF prog-id=8 op=LOAD
Aug 13 00:53:43.745032 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:53:43.757560 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Aug 13 00:53:43.762049 systemd[1]: Started systemd-udevd.service.
Aug 13 00:53:43.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.764617 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 00:53:43.775277 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Aug 13 00:53:43.797803 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 00:53:43.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.799465 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:53:43.838976 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:53:43.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:43.868888 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:53:43.891416 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:53:43.891434 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 00:53:43.891451 kernel: libata version 3.00 loaded.
Aug 13 00:53:43.891467 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:53:43.891476 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:53:43.891485 kernel: GPT:9289727 != 19775487
Aug 13 00:53:43.891494 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:53:43.891503 kernel: GPT:9289727 != 19775487
Aug 13 00:53:43.891511 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:53:43.891519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:43.892993 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:53:43.923572 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:53:43.923590 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 00:53:43.923724 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:53:43.923825 kernel: scsi host0: ahci
Aug 13 00:53:43.923925 kernel: scsi host1: ahci
Aug 13 00:53:43.924083 kernel: scsi host2: ahci
Aug 13 00:53:43.924218 kernel: scsi host3: ahci
Aug 13 00:53:43.924340 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (441)
Aug 13 00:53:43.924351 kernel: scsi host4: ahci
Aug 13 00:53:43.924445 kernel: scsi host5: ahci
Aug 13 00:53:43.924533 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Aug 13 00:53:43.924543 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Aug 13 00:53:43.924551 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Aug 13 00:53:43.924560 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Aug 13 00:53:43.924572 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Aug 13 00:53:43.924581 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Aug 13 00:53:43.922385 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 13 00:53:43.962563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:53:43.975963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 13 00:53:43.978396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 13 00:53:43.985999 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 13 00:53:43.989211 systemd[1]: Starting disk-uuid.service...
Aug 13 00:53:43.999211 disk-uuid[520]: Primary Header is updated.
Aug 13 00:53:43.999211 disk-uuid[520]: Secondary Entries is updated.
Aug 13 00:53:43.999211 disk-uuid[520]: Secondary Header is updated.
Aug 13 00:53:44.003774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:44.007771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:44.010775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:44.235845 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:44.235927 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:44.236798 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:44.237797 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Aug 13 00:53:44.238788 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:44.239799 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 13 00:53:44.241178 kernel: ata3.00: applying bridge limits
Aug 13 00:53:44.241786 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:53:44.242793 kernel: ata3.00: configured for UDMA/100
Aug 13 00:53:44.243796 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 00:53:44.276970 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 13 00:53:44.294477 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:53:44.294496 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Aug 13 00:53:45.008548 disk-uuid[521]: The operation has completed successfully.
Aug 13 00:53:45.009896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:53:45.033919 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:53:45.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.034001 systemd[1]: Finished disk-uuid.service.
Aug 13 00:53:45.042094 systemd[1]: Starting verity-setup.service...
Aug 13 00:53:45.055778 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 00:53:45.076588 systemd[1]: Found device dev-mapper-usr.device.
Aug 13 00:53:45.080143 systemd[1]: Mounting sysusr-usr.mount...
Aug 13 00:53:45.082106 systemd[1]: Finished verity-setup.service.
Aug 13 00:53:45.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.185378 systemd[1]: Mounted sysusr-usr.mount.
Aug 13 00:53:45.186913 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Aug 13 00:53:45.186991 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Aug 13 00:53:45.189068 systemd[1]: Starting ignition-setup.service...
Aug 13 00:53:45.191078 systemd[1]: Starting parse-ip-for-networkd.service...
Aug 13 00:53:45.198484 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:53:45.198522 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:53:45.198533 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:53:45.206675 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:53:45.246496 systemd[1]: Finished parse-ip-for-networkd.service.
Aug 13 00:53:45.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.281000 audit: BPF prog-id=9 op=LOAD
Aug 13 00:53:45.282703 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:53:45.303714 systemd-networkd[707]: lo: Link UP
Aug 13 00:53:45.303722 systemd-networkd[707]: lo: Gained carrier
Aug 13 00:53:45.304221 systemd-networkd[707]: Enumeration completed
Aug 13 00:53:45.304508 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:53:45.304791 systemd[1]: Started systemd-networkd.service.
Aug 13 00:53:45.305889 systemd-networkd[707]: eth0: Link UP
Aug 13 00:53:45.305894 systemd-networkd[707]: eth0: Gained carrier
Aug 13 00:53:45.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.311765 systemd[1]: Reached target network.target.
Aug 13 00:53:45.314619 systemd[1]: Starting iscsiuio.service...
Aug 13 00:53:45.319308 systemd[1]: Started iscsiuio.service.
Aug 13 00:53:45.320892 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:53:45.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.323639 systemd[1]: Starting iscsid.service...
Aug 13 00:53:45.326826 iscsid[712]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 00:53:45.326826 iscsid[712]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Aug 13 00:53:45.326826 iscsid[712]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Aug 13 00:53:45.326826 iscsid[712]: If using hardware iscsi like qla4xxx this message can be ignored.
Aug 13 00:53:45.326826 iscsid[712]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 00:53:45.326826 iscsid[712]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Aug 13 00:53:45.334007 systemd[1]: Started iscsid.service.
Aug 13 00:53:45.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.340669 systemd[1]: Starting dracut-initqueue.service...
Aug 13 00:53:45.350735 systemd[1]: Finished dracut-initqueue.service.
Aug 13 00:53:45.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.352457 systemd[1]: Reached target remote-fs-pre.target.
Aug 13 00:53:45.354044 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 00:53:45.355730 systemd[1]: Reached target remote-fs.target.
Aug 13 00:53:45.357948 systemd[1]: Starting dracut-pre-mount.service...
Aug 13 00:53:45.364708 systemd[1]: Finished dracut-pre-mount.service.
Aug 13 00:53:45.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.418353 systemd[1]: Finished ignition-setup.service.
Aug 13 00:53:45.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.420917 systemd[1]: Starting ignition-fetch-offline.service...
Aug 13 00:53:45.456096 ignition[727]: Ignition 2.14.0
Aug 13 00:53:45.456107 ignition[727]: Stage: fetch-offline
Aug 13 00:53:45.456168 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:45.456179 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:45.456285 ignition[727]: parsed url from cmdline: ""
Aug 13 00:53:45.456288 ignition[727]: no config URL provided
Aug 13 00:53:45.456292 ignition[727]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:53:45.456299 ignition[727]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:53:45.456318 ignition[727]: op(1): [started] loading QEMU firmware config module
Aug 13 00:53:45.456322 ignition[727]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:53:45.465638 ignition[727]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:53:45.503465 ignition[727]: parsing config with SHA512: 1e0b48db294d5fefc063133929714c0391c5b7b6a01fc09fcca1cc14dd803ed33457a5e0cb18a01e90048afbc7c724eeff72b401c4468a7ffc7cd97fad553571
Aug 13 00:53:45.510559 unknown[727]: fetched base config from "system"
Aug 13 00:53:45.510572 unknown[727]: fetched user config from "qemu"
Aug 13 00:53:45.511080 ignition[727]: fetch-offline: fetch-offline passed
Aug 13 00:53:45.512654 systemd[1]: Finished ignition-fetch-offline.service.
Aug 13 00:53:45.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.511141 ignition[727]: Ignition finished successfully
Aug 13 00:53:45.514343 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:53:45.515200 systemd[1]: Starting ignition-kargs.service...
Aug 13 00:53:45.527831 ignition[735]: Ignition 2.14.0
Aug 13 00:53:45.527844 ignition[735]: Stage: kargs
Aug 13 00:53:45.527967 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:45.527981 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:45.530366 systemd[1]: Finished ignition-kargs.service.
Aug 13 00:53:45.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.529177 ignition[735]: kargs: kargs passed
Aug 13 00:53:45.532781 systemd[1]: Starting ignition-disks.service...
Aug 13 00:53:45.529231 ignition[735]: Ignition finished successfully
Aug 13 00:53:45.541424 ignition[741]: Ignition 2.14.0
Aug 13 00:53:45.541436 ignition[741]: Stage: disks
Aug 13 00:53:45.541555 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:45.541568 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:45.543925 systemd[1]: Finished ignition-disks.service.
Aug 13 00:53:45.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.542981 ignition[741]: disks: disks passed
Aug 13 00:53:45.545680 systemd[1]: Reached target initrd-root-device.target.
Aug 13 00:53:45.543035 ignition[741]: Ignition finished successfully
Aug 13 00:53:45.547179 systemd[1]: Reached target local-fs-pre.target.
Aug 13 00:53:45.547987 systemd[1]: Reached target local-fs.target.
Aug 13 00:53:45.549361 systemd[1]: Reached target sysinit.target.
Aug 13 00:53:45.549743 systemd[1]: Reached target basic.target.
Aug 13 00:53:45.550846 systemd[1]: Starting systemd-fsck-root.service...
Aug 13 00:53:45.598186 systemd-fsck[749]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Aug 13 00:53:45.730257 systemd[1]: Finished systemd-fsck-root.service.
Aug 13 00:53:45.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.731645 systemd[1]: Mounting sysroot.mount...
Aug 13 00:53:45.738793 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Aug 13 00:53:45.739467 systemd[1]: Mounted sysroot.mount.
Aug 13 00:53:45.740417 systemd[1]: Reached target initrd-root-fs.target.
Aug 13 00:53:45.743027 systemd[1]: Mounting sysroot-usr.mount...
Aug 13 00:53:45.744720 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Aug 13 00:53:45.744777 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:53:45.744806 systemd[1]: Reached target ignition-diskful.target.
Aug 13 00:53:45.747646 systemd[1]: Mounted sysroot-usr.mount.
Aug 13 00:53:45.749805 systemd[1]: Starting initrd-setup-root.service...
Aug 13 00:53:45.754300 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:53:45.757779 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:53:45.760641 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:53:45.764490 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:53:45.790065 systemd[1]: Finished initrd-setup-root.service.
Aug 13 00:53:45.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.791587 systemd[1]: Starting ignition-mount.service...
Aug 13 00:53:45.792720 systemd[1]: Starting sysroot-boot.service...
Aug 13 00:53:45.796912 bash[800]: umount: /sysroot/usr/share/oem: not mounted.
Aug 13 00:53:45.804177 ignition[801]: INFO : Ignition 2.14.0
Aug 13 00:53:45.804177 ignition[801]: INFO : Stage: mount
Aug 13 00:53:45.805785 ignition[801]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:45.805785 ignition[801]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:45.808660 ignition[801]: INFO : mount: mount passed
Aug 13 00:53:45.809438 ignition[801]: INFO : Ignition finished successfully
Aug 13 00:53:45.810745 systemd[1]: Finished ignition-mount.service.
Aug 13 00:53:45.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:45.813190 systemd[1]: Finished sysroot-boot.service.
Aug 13 00:53:45.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:46.115878 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 00:53:46.122606 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Aug 13 00:53:46.122637 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:53:46.122647 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:53:46.123385 kernel: BTRFS info (device vda6): has skinny extents
Aug 13 00:53:46.127139 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 00:53:46.129349 systemd[1]: Starting ignition-files.service...
Aug 13 00:53:46.142607 ignition[830]: INFO : Ignition 2.14.0
Aug 13 00:53:46.142607 ignition[830]: INFO : Stage: files
Aug 13 00:53:46.144286 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:53:46.144286 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:53:46.144286 ignition[830]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:53:46.147921 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:53:46.147921 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:53:46.147921 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:53:46.147921 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:53:46.147921 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:53:46.147536 unknown[830]: wrote ssh authorized keys file for user: core
Aug 13 00:53:46.155818 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 00:53:46.155818 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 00:53:46.180627 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:53:46.290202 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 00:53:46.297070 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:53:46.297070 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:53:46.396832 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:53:46.411977 systemd-networkd[707]: eth0: Gained IPv6LL
Aug 13 00:53:46.522215 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:53:46.522215 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:53:46.526207 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 00:53:46.920050 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:53:47.239120 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:53:47.239120 ignition[830]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:53:47.243817 ignition[830]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:53:47.270267 kernel: kauditd_printk_skb: 23 callbacks suppressed
Aug 13 00:53:47.270290 kernel: audit: type=1130 audit(1755046427.263:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:47.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:47.270364 ignition[830]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:53:47.270364 ignition[830]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:53:47.270364 ignition[830]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:53:47.270364 ignition[830]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:53:47.270364 ignition[830]: INFO : files: files passed
Aug 13 00:53:47.270364 ignition[830]: INFO : Ignition finished successfully
Aug 13 00:53:47.291141 kernel: audit: type=1130 audit(1755046427.274:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:47.291170 kernel: audit: type=1130 audit(1755046427.280:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:47.291184 kernel: audit: type=1131 audit(1755046427.280:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:47.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:53:47.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 00:53:47.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.262029 systemd[1]: Finished ignition-files.service. Aug 13 00:53:47.264639 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:53:47.293600 initrd-setup-root-after-ignition[853]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 00:53:47.270272 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:53:47.296961 initrd-setup-root-after-ignition[855]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:53:47.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.270896 systemd[1]: Starting ignition-quench.service... Aug 13 00:53:47.307162 kernel: audit: type=1130 audit(1755046427.299:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.307188 kernel: audit: type=1131 audit(1755046427.299:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.272832 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:53:47.275176 systemd[1]: ignition-quench.service: Deactivated successfully. 
Aug 13 00:53:47.275242 systemd[1]: Finished ignition-quench.service. Aug 13 00:53:47.280511 systemd[1]: Reached target ignition-complete.target. Aug 13 00:53:47.286909 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:53:47.298248 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:53:47.298327 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:53:47.299376 systemd[1]: Reached target initrd-fs.target. Aug 13 00:53:47.305562 systemd[1]: Reached target initrd.target. Aug 13 00:53:47.307192 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:53:47.308001 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:53:47.338809 kernel: audit: type=1130 audit(1755046427.334:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.316874 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:53:47.335266 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:53:47.343416 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:53:47.344348 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:53:47.345945 systemd[1]: Stopped target timers.target. Aug 13 00:53:47.347510 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:53:47.353567 kernel: audit: type=1131 audit(1755046427.348:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:47.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.347601 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:53:47.349130 systemd[1]: Stopped target initrd.target. Aug 13 00:53:47.353669 systemd[1]: Stopped target basic.target. Aug 13 00:53:47.355269 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:53:47.356910 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:53:47.359245 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:53:47.360984 systemd[1]: Stopped target remote-fs.target. Aug 13 00:53:47.362766 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:53:47.364796 systemd[1]: Stopped target sysinit.target. Aug 13 00:53:47.366528 systemd[1]: Stopped target local-fs.target. Aug 13 00:53:47.368340 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:53:47.370104 systemd[1]: Stopped target swap.target. Aug 13 00:53:47.378103 kernel: audit: type=1131 audit(1755046427.372:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.371778 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:53:47.371936 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:53:47.384322 kernel: audit: type=1131 audit(1755046427.379:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:47.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.373769 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:53:47.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.378171 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:53:47.378294 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:53:47.380174 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:53:47.380294 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:53:47.384517 systemd[1]: Stopped target paths.target. Aug 13 00:53:47.385925 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:53:47.387841 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:53:47.389518 systemd[1]: Stopped target slices.target. Aug 13 00:53:47.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.391103 systemd[1]: Stopped target sockets.target. Aug 13 00:53:47.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.392554 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:53:47.392618 systemd[1]: Closed iscsid.socket. Aug 13 00:53:47.394276 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:53:47.394372 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
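The Ignition run recorded above fetched files over HTTPS and wrote units and presets from a machine-provided config. As a rough illustration only, a config producing operations like op(4) (fetching the cilium-cli tarball to /opt/bin/cilium.tar.gz) and op(10)/op(11) (presets enabling prepare-helm.service and disabling coreos-metadata.service) would look something like the sketch below; the spec version and all contents are assumptions reconstructed from the log, not the config this VM actually booted with:

```json
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/bin/cilium.tar.gz",
        "contents": {
          "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"
        }
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
```

In practice such JSON is usually generated from a human-readable Butane YAML file rather than written by hand.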
Aug 13 00:53:47.396087 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:53:47.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.396174 systemd[1]: Stopped ignition-files.service. Aug 13 00:53:47.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.398339 systemd[1]: Stopping ignition-mount.service... Aug 13 00:53:47.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.408951 ignition[870]: INFO : Ignition 2.14.0 Aug 13 00:53:47.408951 ignition[870]: INFO : Stage: umount Aug 13 00:53:47.408951 ignition[870]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:53:47.408951 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:53:47.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:47.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.399923 systemd[1]: Stopping iscsiuio.service... Aug 13 00:53:47.417267 ignition[870]: INFO : umount: umount passed Aug 13 00:53:47.417267 ignition[870]: INFO : Ignition finished successfully Aug 13 00:53:47.400949 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:53:47.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.401091 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:53:47.403751 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:53:47.404558 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:53:47.404660 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:53:47.406542 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:53:47.406660 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:53:47.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:47.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.410293 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:53:47.410386 systemd[1]: Stopped iscsiuio.service. Aug 13 00:53:47.412100 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:53:47.438000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:53:47.412176 systemd[1]: Stopped ignition-mount.service. Aug 13 00:53:47.414928 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:53:47.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.415015 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:53:47.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.417751 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:53:47.417979 systemd[1]: Stopped target network.target. Aug 13 00:53:47.419099 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:53:47.419129 systemd[1]: Closed iscsiuio.socket. Aug 13 00:53:47.420969 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:53:47.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:47.421012 systemd[1]: Stopped ignition-disks.service. Aug 13 00:53:47.421283 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:53:47.421313 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:53:47.421528 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:53:47.421560 systemd[1]: Stopped ignition-setup.service. Aug 13 00:53:47.421826 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:53:47.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.421998 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:53:47.430831 systemd-networkd[707]: eth0: DHCPv6 lease lost Aug 13 00:53:47.465000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:53:47.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.431411 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:53:47.431553 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:53:47.433747 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Aug 13 00:53:47.433874 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:53:47.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.436034 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:53:47.436080 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:53:47.439006 systemd[1]: Stopping network-cleanup.service... Aug 13 00:53:47.441055 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:53:47.441115 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:53:47.443175 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:53:47.443225 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:53:47.445385 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:53:47.445432 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:53:47.446620 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:53:47.449516 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:53:47.454016 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:53:47.454124 systemd[1]: Stopped network-cleanup.service. Aug 13 00:53:47.460942 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:53:47.461130 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:53:47.463750 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:53:47.463825 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:53:47.465578 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Aug 13 00:53:47.465619 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:53:47.466167 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:53:47.466216 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:53:47.466557 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:53:47.466600 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:53:47.467111 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:53:47.467154 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:53:47.468539 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:53:47.469171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:53:47.469228 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:53:47.475030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:53:47.475132 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:53:47.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.506598 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:53:47.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:47.506696 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:53:47.508588 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:53:47.510449 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:53:47.510489 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:53:47.512157 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:53:47.529611 systemd[1]: Switching root. Aug 13 00:53:47.549745 iscsid[712]: iscsid shutting down. 
Aug 13 00:53:47.550592 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Aug 13 00:53:47.550635 systemd-journald[198]: Journal stopped Aug 13 00:53:51.852095 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:53:51.852161 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:53:51.852173 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:53:51.852185 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:53:51.852195 kernel: SELinux: policy capability open_perms=1 Aug 13 00:53:51.852206 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:53:51.852222 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:53:51.852231 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:53:51.852242 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:53:51.852252 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:53:51.852261 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:53:51.852278 systemd[1]: Successfully loaded SELinux policy in 39.049ms. Aug 13 00:53:51.852297 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.209ms. Aug 13 00:53:51.852309 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:53:51.852320 systemd[1]: Detected virtualization kvm. Aug 13 00:53:51.852332 systemd[1]: Detected architecture x86-64. Aug 13 00:53:51.852342 systemd[1]: Detected first boot. Aug 13 00:53:51.852353 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:53:51.852363 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Aug 13 00:53:51.852373 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:53:51.852384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:51.852396 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:53:51.852413 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:53:51.852428 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 00:53:51.852441 systemd[1]: Stopped iscsid.service. Aug 13 00:53:51.852452 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:53:51.852462 systemd[1]: Stopped initrd-switch-root.service. Aug 13 00:53:51.852474 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:53:51.852485 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:53:51.852496 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:53:51.852506 systemd[1]: Created slice system-getty.slice. Aug 13 00:53:51.852517 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:53:51.852527 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:53:51.852541 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:53:51.852551 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:53:51.852562 systemd[1]: Created slice user.slice. Aug 13 00:53:51.852574 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:53:51.852584 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:53:51.852595 systemd[1]: Set up automount boot.automount. Aug 13 00:53:51.852605 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
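Among the entries above, systemd warns that locksmithd.service still uses the deprecated cgroup-v1 directives CPUShares= and MemoryLimit=. On a system with a writable /etc these can be migrated with a drop-in rather than editing the vendor unit; the drop-in path and the values below are placeholders, since the original unit's settings are not visible in this log (systemd maps the old default CPUShares=1024 to CPUWeight=100, while a MemoryLimit= value carries over to MemoryMax= unchanged):

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
[Service]
# Empty assignments reset the deprecated directives from the vendor unit,
# then the cgroup-v2 replacements are set.
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=512M
```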
Aug 13 00:53:51.852616 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:53:51.852629 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:53:51.852639 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 00:53:51.852652 systemd[1]: Reached target integritysetup.target. Aug 13 00:53:51.852663 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:53:51.852675 systemd[1]: Reached target remote-fs.target. Aug 13 00:53:51.852685 systemd[1]: Reached target slices.target. Aug 13 00:53:51.852696 systemd[1]: Reached target swap.target. Aug 13 00:53:51.852706 systemd[1]: Reached target torcx.target. Aug 13 00:53:51.852717 systemd[1]: Reached target veritysetup.target. Aug 13 00:53:51.852727 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:53:51.852738 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:53:51.852749 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:53:51.852793 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:53:51.852805 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:53:51.852815 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:53:51.852825 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:53:51.852837 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:53:51.852847 systemd[1]: Mounting media.mount... Aug 13 00:53:51.852858 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:51.852875 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:53:51.852886 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:53:51.852900 systemd[1]: Mounting tmp.mount... Aug 13 00:53:51.852910 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:53:51.852921 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:51.852932 systemd[1]: Starting kmod-static-nodes.service... 
Aug 13 00:53:51.852944 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:53:51.852954 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:51.852965 systemd[1]: Starting modprobe@drm.service... Aug 13 00:53:51.852976 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:51.852986 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:53:51.852997 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:51.853009 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:53:51.853020 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:53:51.853031 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 00:53:51.853042 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:53:51.853052 kernel: loop: module loaded Aug 13 00:53:51.853063 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:53:51.853076 systemd[1]: Stopped systemd-journald.service. Aug 13 00:53:51.853089 kernel: fuse: init (API version 7.34) Aug 13 00:53:51.853103 systemd[1]: Starting systemd-journald.service... Aug 13 00:53:51.853116 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:53:51.853129 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:53:51.853143 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:53:51.853156 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:53:51.853170 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:53:51.853183 systemd[1]: Stopped verity-setup.service. Aug 13 00:53:51.853198 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:51.853211 systemd[1]: Mounted dev-hugepages.mount. 
Aug 13 00:53:51.853233 systemd-journald[981]: Journal started Aug 13 00:53:51.853277 systemd-journald[981]: Runtime Journal (/run/log/journal/ee5c078ef8484ed9b10bd918ad9df3de) is 6.0M, max 48.5M, 42.5M free. Aug 13 00:53:47.609000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:53:48.632000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:53:48.632000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:53:48.632000 audit: BPF prog-id=10 op=LOAD Aug 13 00:53:48.632000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:53:48.632000 audit: BPF prog-id=11 op=LOAD Aug 13 00:53:48.632000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:53:48.669000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:53:48.669000 audit[904]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:48.669000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:53:48.671000 audit[904]: AVC avc: denied { associate } for 
pid=904 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:53:48.671000 audit[904]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079c9 a2=1ed a3=0 items=2 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:48.671000 audit: CWD cwd="/" Aug 13 00:53:48.671000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:48.671000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:48.671000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:53:51.672000 audit: BPF prog-id=12 op=LOAD Aug 13 00:53:51.672000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:53:51.672000 audit: BPF prog-id=13 op=LOAD Aug 13 00:53:51.672000 audit: BPF prog-id=14 op=LOAD Aug 13 00:53:51.672000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:53:51.672000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:53:51.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:51.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.702000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:53:51.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:51.831000 audit: BPF prog-id=15 op=LOAD Aug 13 00:53:51.832000 audit: BPF prog-id=16 op=LOAD Aug 13 00:53:51.832000 audit: BPF prog-id=17 op=LOAD Aug 13 00:53:51.832000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:53:51.832000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:53:51.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.850000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:53:51.850000 audit[981]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffebeb40c40 a2=4000 a3=7ffebeb40cdc items=0 ppid=1 pid=981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:51.850000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:53:48.668061 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:53:51.670614 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:53:48.668334 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:53:51.670625 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Aug 13 00:53:48.668357 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:53:51.674595 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:53:48.668394 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 00:53:48.668407 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 00:53:48.668443 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 00:53:48.668458 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 00:53:48.668700 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 00:53:48.668744 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:53:48.668777 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:53:48.669472 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 00:53:48.669519 /usr/lib/systemd/system-generators/torcx-generator[904]: 
time="2025-08-13T00:53:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 00:53:48.669543 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 00:53:51.855441 systemd[1]: Started systemd-journald.service. Aug 13 00:53:48.669563 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 00:53:48.669587 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 00:53:48.669607 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 00:53:51.301207 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:51Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:51.301554 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:51Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:51.301662 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:51Z" level=debug msg="networkd units propagated" 
assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:51.301855 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:51Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:53:51.301915 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:51Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 00:53:51.301989 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-08-13T00:53:51Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 00:53:51.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.856694 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:53:51.857518 systemd[1]: Mounted media.mount. Aug 13 00:53:51.858275 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:53:51.859101 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:53:51.859955 systemd[1]: Mounted tmp.mount. Aug 13 00:53:51.860855 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:53:51.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:51.861952 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:53:51.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.863079 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:53:51.863219 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:53:51.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.864222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:51.864342 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:51.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.865323 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:53:51.865444 systemd[1]: Finished modprobe@drm.service. Aug 13 00:53:51.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:51.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.866396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:51.866523 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:51.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.867523 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:53:51.867653 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:53:51.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.868611 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:51.868722 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:51.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:51.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.869734 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:53:51.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.870795 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:53:51.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.871884 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:53:51.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.873203 systemd[1]: Reached target network-pre.target. Aug 13 00:53:51.875000 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:53:51.876791 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:53:51.877690 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:53:51.879341 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:53:51.882221 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:53:51.883057 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:51.884084 systemd[1]: Starting systemd-random-seed.service... 
Aug 13 00:53:51.885027 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:51.886107 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:53:51.888925 systemd-journald[981]: Time spent on flushing to /var/log/journal/ee5c078ef8484ed9b10bd918ad9df3de is 35.236ms for 1092 entries. Aug 13 00:53:51.888925 systemd-journald[981]: System Journal (/var/log/journal/ee5c078ef8484ed9b10bd918ad9df3de) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:53:52.503735 systemd-journald[981]: Received client request to flush runtime journal. Aug 13 00:53:51.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:52.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:52.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:51.888352 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:53:51.893680 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:53:51.906119 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:53:52.504537 udevadm[1007]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:53:51.907134 systemd[1]: Mounted sys-kernel-config.mount. 
Aug 13 00:53:51.909081 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:53:51.998602 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:53:52.006200 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:53:52.125088 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:53:52.126071 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:53:52.504967 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:53:52.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:52.510027 kernel: kauditd_printk_skb: 92 callbacks suppressed Aug 13 00:53:52.510080 kernel: audit: type=1130 audit(1755046432.505:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.159810 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:53:53.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.163000 audit: BPF prog-id=18 op=LOAD Aug 13 00:53:53.165147 kernel: audit: type=1130 audit(1755046433.160:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.165219 kernel: audit: type=1334 audit(1755046433.163:129): prog-id=18 op=LOAD Aug 13 00:53:53.165240 kernel: audit: type=1334 audit(1755046433.164:130): prog-id=19 op=LOAD Aug 13 00:53:53.164000 audit: BPF prog-id=19 op=LOAD Aug 13 00:53:53.165892 systemd[1]: Starting systemd-udevd.service... 
Aug 13 00:53:53.166199 kernel: audit: type=1334 audit(1755046433.164:131): prog-id=7 op=UNLOAD Aug 13 00:53:53.166239 kernel: audit: type=1334 audit(1755046433.164:132): prog-id=8 op=UNLOAD Aug 13 00:53:53.164000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:53:53.164000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:53:53.183542 systemd-udevd[1010]: Using default interface naming scheme 'v252'. Aug 13 00:53:53.197532 systemd[1]: Started systemd-udevd.service. Aug 13 00:53:53.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.203165 kernel: audit: type=1130 audit(1755046433.198:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.203239 kernel: audit: type=1334 audit(1755046433.200:134): prog-id=20 op=LOAD Aug 13 00:53:53.200000 audit: BPF prog-id=20 op=LOAD Aug 13 00:53:53.204286 systemd[1]: Starting systemd-networkd.service... Aug 13 00:53:53.213784 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:53:53.212000 audit: BPF prog-id=21 op=LOAD Aug 13 00:53:53.212000 audit: BPF prog-id=22 op=LOAD Aug 13 00:53:53.216488 kernel: audit: type=1334 audit(1755046433.212:135): prog-id=21 op=LOAD Aug 13 00:53:53.216518 kernel: audit: type=1334 audit(1755046433.212:136): prog-id=22 op=LOAD Aug 13 00:53:53.212000 audit: BPF prog-id=23 op=LOAD Aug 13 00:53:53.246283 systemd[1]: Started systemd-userdbd.service. Aug 13 00:53:53.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.254098 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
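The kernel audit lines above timestamp each record as `audit(EPOCH.millis:serial)` rather than wall-clock time. A small sketch, assuming Python, for converting that form to UTC; `audit_time` and `RECORD` are illustrative names, and the sample record is copied (truncated) from the entries above:

```python
import re
from datetime import datetime, timezone

# Sample kernel audit record copied from the log above (truncated).
RECORD = "audit: type=1130 audit(1755046433.198:133): pid=1 uid=0"

def audit_time(record: str) -> str:
    """Extract audit(EPOCH.millis:serial) and render the epoch as UTC."""
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
    if m is None:
        raise ValueError("no audit timestamp in record")
    epoch, millis, serial = m.groups()
    ts = datetime.fromtimestamp(int(epoch), tz=timezone.utc)
    return f"{ts:%Y-%m-%d %H:%M:%S}.{millis} UTC (serial {serial})"

print(audit_time(RECORD))  # 2025-08-13 00:53:53.198 UTC (serial 133)
```

The converted value matches the journald prefix on the same line (`Aug 13 00:53:53.198`), which confirms the system clock here is running in UTC; the serial number is what kauditd's "callbacks suppressed" accounting refers to.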
Aug 13 00:53:53.257547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:53:53.322860 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:53:53.330644 systemd-networkd[1020]: lo: Link UP Aug 13 00:53:53.328000 audit[1025]: AVC avc: denied { confidentiality } for pid=1025 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:53:53.331053 systemd-networkd[1020]: lo: Gained carrier Aug 13 00:53:53.331594 systemd-networkd[1020]: Enumeration completed Aug 13 00:53:53.331784 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:53:53.331768 systemd[1]: Started systemd-networkd.service. Aug 13 00:53:53.331986 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:53:53.333100 systemd-networkd[1020]: eth0: Link UP Aug 13 00:53:53.333184 systemd-networkd[1020]: eth0: Gained carrier Aug 13 00:53:53.328000 audit[1025]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e6ab00dee0 a1=338ac a2=7f719583ebc5 a3=5 items=110 ppid=1010 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:53.328000 audit: CWD cwd="/" Aug 13 00:53:53.328000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=1 name=(null) inode=11033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=2 name=(null) inode=11033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=3 name=(null) inode=11034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=4 name=(null) inode=11033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=5 name=(null) inode=11035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=6 name=(null) inode=11033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=7 name=(null) inode=11036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=8 name=(null) inode=11036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=9 name=(null) inode=11037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=10 name=(null) inode=11036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=11 name=(null) inode=11038 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:53:53.328000 audit: PATH item=12 name=(null) inode=11036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=13 name=(null) inode=11039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=14 name=(null) inode=11036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=15 name=(null) inode=11040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=16 name=(null) inode=11036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=17 name=(null) inode=11041 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=18 name=(null) inode=11033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=19 name=(null) inode=11042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=20 name=(null) inode=11042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=21 
name=(null) inode=11043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=22 name=(null) inode=11042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=23 name=(null) inode=11044 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=24 name=(null) inode=11042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=25 name=(null) inode=11045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=26 name=(null) inode=11042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=27 name=(null) inode=11046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=28 name=(null) inode=11042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=29 name=(null) inode=11047 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=30 name=(null) inode=11033 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=31 name=(null) inode=11048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=32 name=(null) inode=11048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=33 name=(null) inode=11049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=34 name=(null) inode=11048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=35 name=(null) inode=11050 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=36 name=(null) inode=11048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=37 name=(null) inode=11051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=38 name=(null) inode=11048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=39 name=(null) inode=11052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=40 name=(null) inode=11048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=41 name=(null) inode=11053 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=42 name=(null) inode=11033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=43 name=(null) inode=11054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=44 name=(null) inode=11054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=45 name=(null) inode=11055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=46 name=(null) inode=11054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=47 name=(null) inode=11056 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=48 name=(null) inode=11054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=49 name=(null) inode=11057 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=50 name=(null) inode=11054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=51 name=(null) inode=11058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=52 name=(null) inode=11054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=53 name=(null) inode=11059 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=55 name=(null) inode=11060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=56 name=(null) inode=11060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=57 name=(null) inode=11061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=58 name=(null) inode=11060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=59 name=(null) inode=11062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=60 name=(null) inode=11060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=61 name=(null) inode=11063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=62 name=(null) inode=11063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=63 name=(null) inode=11064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=64 name=(null) inode=11063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=65 name=(null) inode=11065 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=66 name=(null) inode=11063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:53:53.328000 audit: PATH item=67 name=(null) inode=11066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=68 name=(null) inode=11063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=69 name=(null) inode=11067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=70 name=(null) inode=11063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=71 name=(null) inode=11068 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=72 name=(null) inode=11060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=73 name=(null) inode=11069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=74 name=(null) inode=11069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=75 name=(null) inode=11070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=76 
name=(null) inode=11069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=77 name=(null) inode=11071 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=78 name=(null) inode=11069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=79 name=(null) inode=11072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=80 name=(null) inode=11069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=81 name=(null) inode=11073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=82 name=(null) inode=11069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=83 name=(null) inode=11074 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=84 name=(null) inode=11060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=85 name=(null) inode=11075 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=86 name=(null) inode=11075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=87 name=(null) inode=11076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=88 name=(null) inode=11075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=89 name=(null) inode=11077 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=90 name=(null) inode=11075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=91 name=(null) inode=11078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=92 name=(null) inode=11075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=93 name=(null) inode=11079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=94 name=(null) inode=11075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=95 name=(null) inode=11080 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=96 name=(null) inode=11060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=97 name=(null) inode=11081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=98 name=(null) inode=11081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=99 name=(null) inode=11082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=100 name=(null) inode=11081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=101 name=(null) inode=11083 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=102 name=(null) inode=11081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=103 name=(null) inode=11084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=104 name=(null) inode=11081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=105 name=(null) inode=11085 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=106 name=(null) inode=11081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=107 name=(null) inode=11086 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PATH item=109 name=(null) inode=11087 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:53:53.328000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:53:53.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:53.372996 systemd-networkd[1020]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:53:53.377052 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:53:53.385932 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:53:53.385980 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:53:53.393654 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 00:53:53.393834 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:53:53.424298 kernel: kvm: Nested Virtualization enabled Aug 13 00:53:53.424401 kernel: SVM: kvm: Nested Paging enabled Aug 13 00:53:53.424416 kernel: SVM: Virtual VMLOAD VMSAVE supported Aug 13 00:53:53.424968 kernel: SVM: Virtual GIF supported Aug 13 00:53:53.442789 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:53:53.469209 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:53:53.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.471567 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:53:53.483625 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:53:53.513890 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:53:53.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.517892 systemd[1]: Reached target cryptsetup.target. Aug 13 00:53:53.520401 systemd[1]: Starting lvm2-activation.service... Aug 13 00:53:53.525334 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 13 00:53:53.552846 systemd[1]: Finished lvm2-activation.service. Aug 13 00:53:53.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.554030 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:53:53.554967 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:53:53.555001 systemd[1]: Reached target local-fs.target. Aug 13 00:53:53.555842 systemd[1]: Reached target machines.target. Aug 13 00:53:53.558361 systemd[1]: Starting ldconfig.service... Aug 13 00:53:53.559601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:53.559646 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:53.560788 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:53:53.562999 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:53:53.568031 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:53:53.570503 systemd[1]: Starting systemd-sysext.service... Aug 13 00:53:53.571320 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1048 (bootctl) Aug 13 00:53:53.572699 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:53:53.576307 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:53:53.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:53:53.589988 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:53:53.596049 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:53:53.596262 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:53:53.609882 kernel: loop0: detected capacity change from 0 to 224512 Aug 13 00:53:53.624106 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Aug 13 00:53:53.624106 systemd-fsck[1056]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 00:53:53.625849 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:53:53.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:53.665460 systemd[1]: Mounting boot.mount... Aug 13 00:53:53.934236 systemd[1]: Mounted boot.mount. Aug 13 00:53:53.944857 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:53:53.947798 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:53:53.949898 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:53:53.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.003025 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:53:54.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.008778 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 00:53:54.013825 (sd-sysext)[1061]: Using extensions 'kubernetes'. Aug 13 00:53:54.014218 (sd-sysext)[1061]: Merged extensions into '/usr'. 
Aug 13 00:53:54.034192 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:54.035749 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:53:54.037140 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.038615 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:54.041329 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:54.043670 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:54.044789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.044953 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:54.045119 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:54.048416 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:53:54.050115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:54.050293 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:54.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.051859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:54.051989 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 13 00:53:54.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.053611 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:54.053751 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:54.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.055556 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:54.055700 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.056752 systemd[1]: Finished systemd-sysext.service. Aug 13 00:53:54.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.059351 systemd[1]: Starting ensure-sysext.service... Aug 13 00:53:54.061718 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:53:54.066685 systemd[1]: Reloading. 
Aug 13 00:53:54.074840 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:53:54.076551 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:53:54.080162 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:53:54.090502 ldconfig[1047]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:53:54.124554 /usr/lib/systemd/system-generators/torcx-generator[1090]: time="2025-08-13T00:53:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:53:54.124586 /usr/lib/systemd/system-generators/torcx-generator[1090]: time="2025-08-13T00:53:54Z" level=info msg="torcx already run" Aug 13 00:53:54.191704 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:53:54.191721 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:53:54.209371 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:53:54.265000 audit: BPF prog-id=24 op=LOAD Aug 13 00:53:54.265000 audit: BPF prog-id=25 op=LOAD Aug 13 00:53:54.265000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:53:54.265000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:53:54.268000 audit: BPF prog-id=26 op=LOAD Aug 13 00:53:54.268000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:53:54.268000 audit: BPF prog-id=27 op=LOAD Aug 13 00:53:54.268000 audit: BPF prog-id=28 op=LOAD Aug 13 00:53:54.268000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:53:54.268000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:53:54.270000 audit: BPF prog-id=29 op=LOAD Aug 13 00:53:54.270000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:53:54.270000 audit: BPF prog-id=30 op=LOAD Aug 13 00:53:54.270000 audit: BPF prog-id=31 op=LOAD Aug 13 00:53:54.270000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:53:54.270000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:53:54.271000 audit: BPF prog-id=32 op=LOAD Aug 13 00:53:54.271000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:53:54.273863 systemd[1]: Finished ldconfig.service. Aug 13 00:53:54.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.275992 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:53:54.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.279749 systemd[1]: Starting audit-rules.service... Aug 13 00:53:54.281750 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:53:54.283829 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:53:54.285000 audit: BPF prog-id=33 op=LOAD Aug 13 00:53:54.287129 systemd[1]: Starting systemd-resolved.service... 
Aug 13 00:53:54.287000 audit: BPF prog-id=34 op=LOAD Aug 13 00:53:54.289473 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:53:54.291853 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:53:54.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.293518 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:53:54.296721 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:54.296000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.300339 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:53:54.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.301933 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.303245 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:54.305171 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:54.307267 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:54.308151 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.308297 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Aug 13 00:53:54.309596 systemd[1]: Starting systemd-update-done.service... Aug 13 00:53:54.310510 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:54.311783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:54.311911 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:54.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:53:54.313427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:54.313532 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:54.313000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:53:54.313000 audit[1155]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd182f8f60 a2=420 a3=0 items=0 ppid=1131 pid=1155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:53:54.313000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:53:54.314160 augenrules[1155]: No rules Aug 13 00:53:54.315035 systemd[1]: Finished audit-rules.service. Aug 13 00:53:54.316264 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:54.316369 systemd[1]: Finished modprobe@loop.service. 
Aug 13 00:53:54.317628 systemd[1]: Finished systemd-update-done.service. Aug 13 00:53:54.319096 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:54.319259 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.320484 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:53:54.323400 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.324729 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:54.326831 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:54.328683 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:54.329537 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.329638 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:54.329733 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:54.330639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:54.330812 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:54.332320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:54.332467 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:54.333696 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:54.333822 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:54.335138 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 00:53:54.335229 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.337665 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.339024 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:53:54.341108 systemd[1]: Starting modprobe@drm.service... Aug 13 00:53:54.342947 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:53:54.344957 systemd[1]: Starting modprobe@loop.service... Aug 13 00:53:54.345781 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.345951 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:54.346119 systemd-timesyncd[1141]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:53:54.346398 systemd-timesyncd[1141]: Initial clock synchronization to Wed 2025-08-13 00:53:54.250707 UTC. Aug 13 00:53:54.347214 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:53:54.348377 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:53:54.349559 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:53:54.351684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:53:54.352059 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:53:54.352361 systemd-resolved[1138]: Positive Trust Anchors: Aug 13 00:53:54.352373 systemd-resolved[1138]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:53:54.352397 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:53:54.353633 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:53:54.353790 systemd[1]: Finished modprobe@drm.service. Aug 13 00:53:54.355194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:53:54.355304 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:53:54.356673 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:53:54.356812 systemd[1]: Finished modprobe@loop.service. Aug 13 00:53:54.360002 systemd[1]: Finished ensure-sysext.service. Aug 13 00:53:54.361083 systemd[1]: Reached target time-set.target. Aug 13 00:53:54.361106 systemd-resolved[1138]: Defaulting to hostname 'linux'. Aug 13 00:53:54.361934 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:53:54.361975 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.362544 systemd[1]: Started systemd-resolved.service. Aug 13 00:53:54.363433 systemd[1]: Reached target network.target. Aug 13 00:53:54.364284 systemd[1]: Reached target nss-lookup.target. Aug 13 00:53:54.365122 systemd[1]: Reached target sysinit.target. Aug 13 00:53:54.366014 systemd[1]: Started motdgen.path. Aug 13 00:53:54.366742 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Aug 13 00:53:54.368113 systemd[1]: Started logrotate.timer. Aug 13 00:53:54.368937 systemd[1]: Started mdadm.timer. Aug 13 00:53:54.369616 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:53:54.370523 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:53:54.370552 systemd[1]: Reached target paths.target. Aug 13 00:53:54.371349 systemd[1]: Reached target timers.target. Aug 13 00:53:54.372449 systemd[1]: Listening on dbus.socket. Aug 13 00:53:54.374597 systemd[1]: Starting docker.socket... Aug 13 00:53:54.377719 systemd[1]: Listening on sshd.socket. Aug 13 00:53:54.378625 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:54.379007 systemd[1]: Listening on docker.socket. Aug 13 00:53:54.379853 systemd[1]: Reached target sockets.target. Aug 13 00:53:54.380622 systemd[1]: Reached target basic.target. Aug 13 00:53:54.381440 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.381463 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:53:54.382371 systemd[1]: Starting containerd.service... Aug 13 00:53:54.384072 systemd[1]: Starting dbus.service... Aug 13 00:53:54.385696 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:53:54.387537 systemd[1]: Starting extend-filesystems.service... Aug 13 00:53:54.388597 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:53:54.389501 systemd[1]: Starting motdgen.service... Aug 13 00:53:54.391241 systemd[1]: Starting prepare-helm.service... 
Aug 13 00:53:54.392114 jq[1174]: false Aug 13 00:53:54.392957 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:53:54.394703 systemd[1]: Starting sshd-keygen.service... Aug 13 00:53:54.416095 extend-filesystems[1175]: Found loop1 Aug 13 00:53:54.416095 extend-filesystems[1175]: Found sr0 Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda1 Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda2 Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda3 Aug 13 00:53:54.417077 extend-filesystems[1175]: Found usr Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda4 Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda6 Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda7 Aug 13 00:53:54.417077 extend-filesystems[1175]: Found vda9 Aug 13 00:53:54.417077 extend-filesystems[1175]: Checking size of /dev/vda9 Aug 13 00:53:54.416311 systemd[1]: Starting systemd-logind.service... Aug 13 00:53:54.425005 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:53:54.425112 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:53:54.427390 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:53:54.427748 extend-filesystems[1175]: Resized partition /dev/vda9 Aug 13 00:53:54.430977 systemd[1]: Starting update-engine.service... Aug 13 00:53:54.433424 extend-filesystems[1196]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 00:53:54.448506 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:53:54.432966 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Aug 13 00:53:54.436175 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:53:54.448685 jq[1198]: true Aug 13 00:53:54.436376 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:53:54.436654 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:53:54.436814 systemd[1]: Finished motdgen.service. Aug 13 00:53:54.449476 jq[1202]: true Aug 13 00:53:54.438451 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:53:54.438619 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:53:54.451065 tar[1201]: linux-amd64/LICENSE Aug 13 00:53:54.451065 tar[1201]: linux-amd64/helm Aug 13 00:53:54.485832 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:53:54.487613 dbus-daemon[1173]: [system] SELinux support is enabled Aug 13 00:53:54.487786 systemd[1]: Started dbus.service. Aug 13 00:53:54.490123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:53:54.490141 systemd[1]: Reached target system-config.target. Aug 13 00:53:54.491018 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:53:54.491033 systemd[1]: Reached target user-config.target. Aug 13 00:53:54.516956 systemd-logind[1189]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:53:54.516971 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:53:54.517524 systemd-logind[1189]: New seat seat0. Aug 13 00:53:54.519613 systemd[1]: Started systemd-logind.service. 
Aug 13 00:53:54.519874 extend-filesystems[1196]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:53:54.519874 extend-filesystems[1196]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:53:54.519874 extend-filesystems[1196]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 00:53:54.524859 extend-filesystems[1175]: Resized filesystem in /dev/vda9 Aug 13 00:53:54.526063 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:53:54.526221 systemd[1]: Finished extend-filesystems.service. Aug 13 00:53:54.529798 update_engine[1197]: I0813 00:53:54.529492 1197 main.cc:92] Flatcar Update Engine starting Aug 13 00:53:54.593311 bash[1223]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:53:54.594188 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:53:54.597025 env[1203]: time="2025-08-13T00:53:54.596357447Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:53:54.599007 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:54.599083 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:53:54.600508 systemd[1]: Started update-engine.service. Aug 13 00:53:54.600609 update_engine[1197]: I0813 00:53:54.600584 1197 update_check_scheduler.cc:74] Next update check in 7m21s Aug 13 00:53:54.603772 systemd[1]: Started locksmithd.service. Aug 13 00:53:54.627386 env[1203]: time="2025-08-13T00:53:54.627329700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:53:54.627544 env[1203]: time="2025-08-13T00:53:54.627518965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629140 env[1203]: time="2025-08-13T00:53:54.629102704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629140 env[1203]: time="2025-08-13T00:53:54.629133813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629366 env[1203]: time="2025-08-13T00:53:54.629337475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629366 env[1203]: time="2025-08-13T00:53:54.629359356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629449 env[1203]: time="2025-08-13T00:53:54.629371498Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:53:54.629449 env[1203]: time="2025-08-13T00:53:54.629380656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629449 env[1203]: time="2025-08-13T00:53:54.629443303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629700 env[1203]: time="2025-08-13T00:53:54.629671962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629836 env[1203]: time="2025-08-13T00:53:54.629809530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:53:54.629836 env[1203]: time="2025-08-13T00:53:54.629830269Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:53:54.629923 env[1203]: time="2025-08-13T00:53:54.629881204Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:53:54.629923 env[1203]: time="2025-08-13T00:53:54.629892004Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:53:54.636424 env[1203]: time="2025-08-13T00:53:54.636393414Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:53:54.636472 env[1203]: time="2025-08-13T00:53:54.636425274Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:53:54.636472 env[1203]: time="2025-08-13T00:53:54.636438269Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:53:54.636512 env[1203]: time="2025-08-13T00:53:54.636477242Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.636512 env[1203]: time="2025-08-13T00:53:54.636494624Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.636512 env[1203]: time="2025-08-13T00:53:54.636508019Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.636594 env[1203]: time="2025-08-13T00:53:54.636520633Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Aug 13 00:53:54.636594 env[1203]: time="2025-08-13T00:53:54.636534729Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.636594 env[1203]: time="2025-08-13T00:53:54.636547333Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.636658 env[1203]: time="2025-08-13T00:53:54.636614519Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.636658 env[1203]: time="2025-08-13T00:53:54.636634997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.636658 env[1203]: time="2025-08-13T00:53:54.636646589Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:53:54.636780 env[1203]: time="2025-08-13T00:53:54.636744553Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:53:54.636897 env[1203]: time="2025-08-13T00:53:54.636871561Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:53:54.637146 env[1203]: time="2025-08-13T00:53:54.637120007Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:53:54.637192 env[1203]: time="2025-08-13T00:53:54.637162967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637192 env[1203]: time="2025-08-13T00:53:54.637179238Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:53:54.637266 env[1203]: time="2025-08-13T00:53:54.637243839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Aug 13 00:53:54.637266 env[1203]: time="2025-08-13T00:53:54.637263185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637339 env[1203]: time="2025-08-13T00:53:54.637274877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637339 env[1203]: time="2025-08-13T00:53:54.637287611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637339 env[1203]: time="2025-08-13T00:53:54.637306787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637339 env[1203]: time="2025-08-13T00:53:54.637321294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637339 env[1203]: time="2025-08-13T00:53:54.637334058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637469 env[1203]: time="2025-08-13T00:53:54.637347383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637469 env[1203]: time="2025-08-13T00:53:54.637363543Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:53:54.637569 env[1203]: time="2025-08-13T00:53:54.637542469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637569 env[1203]: time="2025-08-13T00:53:54.637566714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637634 env[1203]: time="2025-08-13T00:53:54.637578276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Aug 13 00:53:54.637634 env[1203]: time="2025-08-13T00:53:54.637589347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:53:54.637634 env[1203]: time="2025-08-13T00:53:54.637602291Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:53:54.637634 env[1203]: time="2025-08-13T00:53:54.637614795Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:53:54.637731 env[1203]: time="2025-08-13T00:53:54.637648267Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:53:54.637731 env[1203]: time="2025-08-13T00:53:54.637686088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:53:54.637974 env[1203]: time="2025-08-13T00:53:54.637911290Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:53:54.637974 env[1203]: time="2025-08-13T00:53:54.637978927Z" level=info msg="Connect containerd service" Aug 13 00:53:54.638946 env[1203]: time="2025-08-13T00:53:54.638028480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:53:54.638946 env[1203]: time="2025-08-13T00:53:54.638899043Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:53:54.639136 env[1203]: time="2025-08-13T00:53:54.639087145Z" level=info msg="Start subscribing containerd event" Aug 13 00:53:54.639182 env[1203]: time="2025-08-13T00:53:54.639148500Z" level=info msg="Start recovering state" Aug 13 00:53:54.639205 env[1203]: 
time="2025-08-13T00:53:54.639179138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:53:54.639228 env[1203]: time="2025-08-13T00:53:54.639212530Z" level=info msg="Start event monitor" Aug 13 00:53:54.639228 env[1203]: time="2025-08-13T00:53:54.639219153Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:53:54.639228 env[1203]: time="2025-08-13T00:53:54.639226316Z" level=info msg="Start snapshots syncer" Aug 13 00:53:54.639298 env[1203]: time="2025-08-13T00:53:54.639234421Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:53:54.639298 env[1203]: time="2025-08-13T00:53:54.639241655Z" level=info msg="Start streaming server" Aug 13 00:53:54.639352 systemd[1]: Started containerd.service. Aug 13 00:53:54.640046 env[1203]: time="2025-08-13T00:53:54.640019964Z" level=info msg="containerd successfully booted in 0.044543s" Aug 13 00:53:54.668998 locksmithd[1229]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:53:54.766019 sshd_keygen[1193]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:53:54.785993 systemd[1]: Finished sshd-keygen.service. Aug 13 00:53:54.788474 systemd[1]: Starting issuegen.service... Aug 13 00:53:54.793954 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:53:54.794140 systemd[1]: Finished issuegen.service. Aug 13 00:53:54.796919 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:53:54.806073 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:53:54.808668 systemd[1]: Started getty@tty1.service. Aug 13 00:53:54.811169 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:53:54.812350 systemd[1]: Reached target getty.target. Aug 13 00:53:54.859934 systemd-networkd[1020]: eth0: Gained IPv6LL Aug 13 00:53:54.861822 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:53:54.863144 systemd[1]: Reached target network-online.target. 
Aug 13 00:53:54.865650 systemd[1]: Starting kubelet.service... Aug 13 00:53:55.213193 tar[1201]: linux-amd64/README.md Aug 13 00:53:55.217281 systemd[1]: Finished prepare-helm.service. Aug 13 00:53:55.386431 systemd[1]: Created slice system-sshd.slice. Aug 13 00:53:55.389270 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:45546.service. Aug 13 00:53:55.436237 sshd[1254]: Accepted publickey for core from 10.0.0.1 port 45546 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:53:55.440087 sshd[1254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.450889 systemd-logind[1189]: New session 1 of user core. Aug 13 00:53:55.452040 systemd[1]: Created slice user-500.slice. Aug 13 00:53:55.454501 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:53:55.471083 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:53:55.474124 systemd[1]: Starting user@500.service... Aug 13 00:53:55.477112 (systemd)[1257]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.569866 systemd[1257]: Queued start job for default target default.target. Aug 13 00:53:55.570406 systemd[1257]: Reached target paths.target. Aug 13 00:53:55.570435 systemd[1257]: Reached target sockets.target. Aug 13 00:53:55.570453 systemd[1257]: Reached target timers.target. Aug 13 00:53:55.570468 systemd[1257]: Reached target basic.target. Aug 13 00:53:55.570578 systemd[1]: Started user@500.service. Aug 13 00:53:55.571307 systemd[1257]: Reached target default.target. Aug 13 00:53:55.571370 systemd[1257]: Startup finished in 88ms. Aug 13 00:53:55.572423 systemd[1]: Started session-1.scope. Aug 13 00:53:55.628660 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:45550.service. 
Aug 13 00:53:55.673173 sshd[1266]: Accepted publickey for core from 10.0.0.1 port 45550 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:53:55.675715 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.679554 systemd-logind[1189]: New session 2 of user core. Aug 13 00:53:55.680387 systemd[1]: Started session-2.scope. Aug 13 00:53:55.740931 sshd[1266]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:55.744163 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:45550.service: Deactivated successfully. Aug 13 00:53:55.744696 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:53:55.745276 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:53:55.747048 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:45562.service. Aug 13 00:53:55.749196 systemd-logind[1189]: Removed session 2. Aug 13 00:53:55.821565 sshd[1272]: Accepted publickey for core from 10.0.0.1 port 45562 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:53:55.824182 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:53:55.827730 systemd-logind[1189]: New session 3 of user core. Aug 13 00:53:55.828583 systemd[1]: Started session-3.scope. Aug 13 00:53:55.884461 sshd[1272]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:55.887286 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:45562.service: Deactivated successfully. Aug 13 00:53:55.888135 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:53:55.888589 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:53:55.889277 systemd-logind[1189]: Removed session 3. Aug 13 00:53:56.283338 systemd[1]: Started kubelet.service. Aug 13 00:53:56.285226 systemd[1]: Reached target multi-user.target. Aug 13 00:53:56.288023 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Aug 13 00:53:56.298250 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:53:56.298441 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:53:56.299790 systemd[1]: Startup finished in 905ms (kernel) + 4.627s (initrd) + 8.730s (userspace) = 14.262s. Aug 13 00:53:57.218518 kubelet[1280]: E0813 00:53:57.218435 1280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:53:57.220403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:53:57.220551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:53:57.220889 systemd[1]: kubelet.service: Consumed 2.247s CPU time. Aug 13 00:54:05.825838 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:43428.service. Aug 13 00:54:05.871915 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 43428 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:54:05.873114 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:05.876789 systemd-logind[1189]: New session 4 of user core. Aug 13 00:54:05.877830 systemd[1]: Started session-4.scope. Aug 13 00:54:05.931499 sshd[1290]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:05.934182 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:43428.service: Deactivated successfully. Aug 13 00:54:05.934779 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:54:05.935288 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:54:05.936431 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:43440.service. Aug 13 00:54:05.937187 systemd-logind[1189]: Removed session 4. 
Aug 13 00:54:05.976133 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 43440 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:54:05.977398 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:54:05.980953 systemd-logind[1189]: New session 5 of user core.
Aug 13 00:54:05.981997 systemd[1]: Started session-5.scope.
Aug 13 00:54:06.031252 sshd[1296]: pam_unix(sshd:session): session closed for user core
Aug 13 00:54:06.034522 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:43454.service.
Aug 13 00:54:06.035059 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:43440.service: Deactivated successfully.
Aug 13 00:54:06.035651 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:54:06.036260 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:54:06.037268 systemd-logind[1189]: Removed session 5.
Aug 13 00:54:06.074164 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 43454 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:54:06.075716 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:54:06.079475 systemd-logind[1189]: New session 6 of user core.
Aug 13 00:54:06.080639 systemd[1]: Started session-6.scope.
Aug 13 00:54:06.134972 sshd[1301]: pam_unix(sshd:session): session closed for user core
Aug 13 00:54:06.137660 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:43454.service: Deactivated successfully.
Aug 13 00:54:06.138236 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:54:06.138839 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:54:06.139931 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:43464.service.
Aug 13 00:54:06.140716 systemd-logind[1189]: Removed session 6.
Aug 13 00:54:06.178836 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 43464 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:54:06.179977 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:54:06.183521 systemd-logind[1189]: New session 7 of user core.
Aug 13 00:54:06.184421 systemd[1]: Started session-7.scope.
Aug 13 00:54:06.334284 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:54:06.334557 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:54:06.361925 systemd[1]: Starting docker.service...
Aug 13 00:54:06.421469 env[1323]: time="2025-08-13T00:54:06.421392061Z" level=info msg="Starting up"
Aug 13 00:54:06.423367 env[1323]: time="2025-08-13T00:54:06.423325287Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:54:06.423367 env[1323]: time="2025-08-13T00:54:06.423344790Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:54:06.423476 env[1323]: time="2025-08-13T00:54:06.423368839Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:54:06.423476 env[1323]: time="2025-08-13T00:54:06.423389332Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:54:06.425704 env[1323]: time="2025-08-13T00:54:06.425658965Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:54:06.425704 env[1323]: time="2025-08-13T00:54:06.425690917Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:54:06.425829 env[1323]: time="2025-08-13T00:54:06.425712658Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:54:06.425829 env[1323]: time="2025-08-13T00:54:06.425724288Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:54:06.431447 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport385952016-merged.mount: Deactivated successfully.
Aug 13 00:54:06.657612 env[1323]: time="2025-08-13T00:54:06.657480612Z" level=info msg="Loading containers: start."
Aug 13 00:54:06.788789 kernel: Initializing XFRM netlink socket
Aug 13 00:54:06.818740 env[1323]: time="2025-08-13T00:54:06.818686033Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 00:54:06.877228 systemd-networkd[1020]: docker0: Link UP
Aug 13 00:54:06.894954 env[1323]: time="2025-08-13T00:54:06.894914233Z" level=info msg="Loading containers: done."
Aug 13 00:54:06.911517 env[1323]: time="2025-08-13T00:54:06.911416063Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:54:06.911700 env[1323]: time="2025-08-13T00:54:06.911593239Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 00:54:06.911700 env[1323]: time="2025-08-13T00:54:06.911671461Z" level=info msg="Daemon has completed initialization"
Aug 13 00:54:06.931740 systemd[1]: Started docker.service.
Aug 13 00:54:06.941611 env[1323]: time="2025-08-13T00:54:06.941528731Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:54:07.471438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:54:07.471639 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:07.471684 systemd[1]: kubelet.service: Consumed 2.247s CPU time.
Aug 13 00:54:07.473398 systemd[1]: Starting kubelet.service...
Aug 13 00:54:07.663789 systemd[1]: Started kubelet.service.
Aug 13 00:54:07.744036 kubelet[1454]: E0813 00:54:07.743883 1454 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:54:07.746785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:54:07.746920 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:54:07.922989 env[1203]: time="2025-08-13T00:54:07.922913692Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 00:54:11.005433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878285725.mount: Deactivated successfully.
Aug 13 00:54:13.656531 env[1203]: time="2025-08-13T00:54:13.656447791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:13.658915 env[1203]: time="2025-08-13T00:54:13.658856594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:13.661638 env[1203]: time="2025-08-13T00:54:13.661605309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:13.664719 env[1203]: time="2025-08-13T00:54:13.664681926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:13.665430 env[1203]: time="2025-08-13T00:54:13.665377762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Aug 13 00:54:13.666597 env[1203]: time="2025-08-13T00:54:13.666570944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 00:54:16.187242 env[1203]: time="2025-08-13T00:54:16.187137831Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:16.188873 env[1203]: time="2025-08-13T00:54:16.188837772Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:16.191014 env[1203]: time="2025-08-13T00:54:16.190983369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:16.259458 env[1203]: time="2025-08-13T00:54:16.259358078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:16.260532 env[1203]: time="2025-08-13T00:54:16.260467619Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 13 00:54:16.261367 env[1203]: time="2025-08-13T00:54:16.261339395Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 00:54:17.998144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 00:54:17.998514 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:18.001166 systemd[1]: Starting kubelet.service...
Aug 13 00:54:18.103852 systemd[1]: Started kubelet.service.
Aug 13 00:54:18.315111 kubelet[1468]: E0813 00:54:18.314960 1468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:54:18.317016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:54:18.317193 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:54:20.537688 env[1203]: time="2025-08-13T00:54:20.537573531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.540503 env[1203]: time="2025-08-13T00:54:20.540442387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.543980 env[1203]: time="2025-08-13T00:54:20.543923324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.547078 env[1203]: time="2025-08-13T00:54:20.546999753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:20.548394 env[1203]: time="2025-08-13T00:54:20.548319299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 13 00:54:20.549492 env[1203]: time="2025-08-13T00:54:20.549455990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 00:54:21.772411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331881884.mount: Deactivated successfully.
Aug 13 00:54:22.717690 env[1203]: time="2025-08-13T00:54:22.717618774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:22.722970 env[1203]: time="2025-08-13T00:54:22.722918897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:22.724627 env[1203]: time="2025-08-13T00:54:22.724580595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:22.726388 env[1203]: time="2025-08-13T00:54:22.726345804Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:22.726830 env[1203]: time="2025-08-13T00:54:22.726780088Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 13 00:54:22.727445 env[1203]: time="2025-08-13T00:54:22.727413120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:54:23.732051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945061645.mount: Deactivated successfully.
Aug 13 00:54:25.581683 env[1203]: time="2025-08-13T00:54:25.581616534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:25.583593 env[1203]: time="2025-08-13T00:54:25.583533062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:25.585332 env[1203]: time="2025-08-13T00:54:25.585301234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:25.587082 env[1203]: time="2025-08-13T00:54:25.587046959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:25.587772 env[1203]: time="2025-08-13T00:54:25.587725765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:54:25.588343 env[1203]: time="2025-08-13T00:54:25.588315113Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:54:26.729770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752623706.mount: Deactivated successfully.
Aug 13 00:54:26.737103 env[1203]: time="2025-08-13T00:54:26.737025562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:26.739485 env[1203]: time="2025-08-13T00:54:26.739425499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:26.741331 env[1203]: time="2025-08-13T00:54:26.741270520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:26.742712 env[1203]: time="2025-08-13T00:54:26.742652690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:26.743125 env[1203]: time="2025-08-13T00:54:26.743085971Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:54:26.743897 env[1203]: time="2025-08-13T00:54:26.743844651Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 00:54:27.261908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513477668.mount: Deactivated successfully.
Aug 13 00:54:28.391289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 00:54:28.391529 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:28.393737 systemd[1]: Starting kubelet.service...
Aug 13 00:54:28.504898 systemd[1]: Started kubelet.service.
Aug 13 00:54:29.703463 kubelet[1479]: E0813 00:54:29.703371 1479 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:54:29.705705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:54:29.705880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:54:32.267247 env[1203]: time="2025-08-13T00:54:32.267168586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:32.269133 env[1203]: time="2025-08-13T00:54:32.269100618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:32.270940 env[1203]: time="2025-08-13T00:54:32.270914096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:32.273118 env[1203]: time="2025-08-13T00:54:32.273077342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:54:32.273972 env[1203]: time="2025-08-13T00:54:32.273921659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 13 00:54:34.837533 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:34.839694 systemd[1]: Starting kubelet.service...
Aug 13 00:54:34.862656 systemd[1]: Reloading.
Aug 13 00:54:34.940849 /usr/lib/systemd/system-generators/torcx-generator[1533]: time="2025-08-13T00:54:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:54:34.941327 /usr/lib/systemd/system-generators/torcx-generator[1533]: time="2025-08-13T00:54:34Z" level=info msg="torcx already run"
Aug 13 00:54:35.751366 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:54:35.751389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:54:35.775848 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:54:35.881531 systemd[1]: Started kubelet.service.
Aug 13 00:54:35.884261 systemd[1]: Stopping kubelet.service...
Aug 13 00:54:35.884543 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 00:54:35.884707 systemd[1]: Stopped kubelet.service.
Aug 13 00:54:35.886067 systemd[1]: Starting kubelet.service...
Aug 13 00:54:35.983561 systemd[1]: Started kubelet.service.
Aug 13 00:54:36.128556 kubelet[1583]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:54:36.128997 kubelet[1583]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:54:36.128997 kubelet[1583]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:54:36.129333 kubelet[1583]: I0813 00:54:36.129090 1583 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:54:36.377512 kubelet[1583]: I0813 00:54:36.377452 1583 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 00:54:36.377512 kubelet[1583]: I0813 00:54:36.377489 1583 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:54:36.377825 kubelet[1583]: I0813 00:54:36.377803 1583 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 00:54:36.401043 kubelet[1583]: E0813 00:54:36.400997 1583 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:36.401952 kubelet[1583]: I0813 00:54:36.401914 1583 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:54:36.409882 kubelet[1583]: E0813 00:54:36.409838 1583 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:54:36.409882 kubelet[1583]: I0813 00:54:36.409873 1583 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:54:36.416269 kubelet[1583]: I0813 00:54:36.416227 1583 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:54:36.417889 kubelet[1583]: I0813 00:54:36.417845 1583 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:54:36.418121 kubelet[1583]: I0813 00:54:36.417883 1583 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:54:36.418121 kubelet[1583]: I0813 00:54:36.418120 1583 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:54:36.418313 kubelet[1583]: I0813 00:54:36.418130 1583 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 00:54:36.418313 kubelet[1583]: I0813 00:54:36.418308 1583 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:54:36.420799 kubelet[1583]: I0813 00:54:36.420771 1583 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 00:54:36.420870 kubelet[1583]: I0813 00:54:36.420805 1583 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:54:36.420870 kubelet[1583]: I0813 00:54:36.420832 1583 kubelet.go:352] "Adding apiserver pod source"
Aug 13 00:54:36.420870 kubelet[1583]: I0813 00:54:36.420846 1583 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:54:36.431525 kubelet[1583]: W0813 00:54:36.431454 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Aug 13 00:54:36.431525 kubelet[1583]: E0813 00:54:36.431520 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:36.439057 kubelet[1583]: W0813 00:54:36.439007 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Aug 13 00:54:36.439147 kubelet[1583]: E0813 00:54:36.439062 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:36.440985 kubelet[1583]: I0813 00:54:36.440929 1583 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:54:36.441599 kubelet[1583]: I0813 00:54:36.441573 1583 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:54:36.444345 kubelet[1583]: W0813 00:54:36.444306 1583 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:54:36.446824 kubelet[1583]: I0813 00:54:36.446782 1583 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:54:36.446886 kubelet[1583]: I0813 00:54:36.446830 1583 server.go:1287] "Started kubelet"
Aug 13 00:54:36.448720 kubelet[1583]: I0813 00:54:36.448427 1583 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:54:36.450151 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Aug 13 00:54:36.450390 kubelet[1583]: I0813 00:54:36.450352 1583 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:54:36.451912 kubelet[1583]: I0813 00:54:36.451878 1583 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 00:54:36.451994 kubelet[1583]: I0813 00:54:36.451898 1583 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:54:36.452178 kubelet[1583]: I0813 00:54:36.452153 1583 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:54:36.453175 kubelet[1583]: I0813 00:54:36.453138 1583 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:54:36.455397 kubelet[1583]: E0813 00:54:36.455344 1583 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:54:36.455481 kubelet[1583]: I0813 00:54:36.455407 1583 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:54:36.455647 kubelet[1583]: I0813 00:54:36.455615 1583 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:54:36.455779 kubelet[1583]: I0813 00:54:36.455731 1583 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:54:36.456830 kubelet[1583]: W0813 00:54:36.456485 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Aug 13 00:54:36.456830 kubelet[1583]: E0813 00:54:36.456554 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:36.456830 kubelet[1583]: E0813 00:54:36.453582 1583 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2d78c0e99e69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:36.446801513 +0000 UTC m=+0.362087304,LastTimestamp:2025-08-13 00:54:36.446801513 +0000 UTC m=+0.362087304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 00:54:36.457053 kubelet[1583]: E0813 00:54:36.457028 1583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms"
Aug 13 00:54:36.457472 kubelet[1583]: I0813 00:54:36.457442 1583 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:54:36.457987 kubelet[1583]: E0813 00:54:36.457915 1583 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:54:36.458363 kubelet[1583]: I0813 00:54:36.458327 1583 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:54:36.458363 kubelet[1583]: I0813 00:54:36.458355 1583 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:54:36.470450 kubelet[1583]: I0813 00:54:36.470393 1583 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:54:36.471290 kubelet[1583]: I0813 00:54:36.471260 1583 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:54:36.471360 kubelet[1583]: I0813 00:54:36.471295 1583 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 00:54:36.471360 kubelet[1583]: I0813 00:54:36.471328 1583 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:54:36.471360 kubelet[1583]: I0813 00:54:36.471337 1583 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 00:54:36.471472 kubelet[1583]: E0813 00:54:36.471399 1583 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:54:36.475608 kubelet[1583]: W0813 00:54:36.475557 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Aug 13 00:54:36.475693 kubelet[1583]: E0813 00:54:36.475616 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:54:36.477737 kubelet[1583]: I0813 00:54:36.477707 1583 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 00:54:36.477737 kubelet[1583]: I0813 00:54:36.477729 1583 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 00:54:36.477874 kubelet[1583]: I0813 00:54:36.477750 1583 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:54:36.556283 kubelet[1583]: E0813 00:54:36.556206 1583 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:54:36.572529 kubelet[1583]: E0813 00:54:36.572483 1583 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 00:54:36.657012 kubelet[1583]: E0813 00:54:36.656797 1583 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:54:36.658502 kubelet[1583]: E0813 00:54:36.658428 1583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms"
Aug 13 00:54:36.685356 kubelet[1583]: I0813 00:54:36.685295 1583 policy_none.go:49] "None policy: Start"
Aug 13 00:54:36.685356 kubelet[1583]: I0813 00:54:36.685357 1583 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 00:54:36.685601 kubelet[1583]: I0813 00:54:36.685407 1583 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:54:36.693416 systemd[1]: Created slice kubepods.slice.
Aug 13 00:54:36.698821 systemd[1]: Created slice kubepods-burstable.slice.
Aug 13 00:54:36.701523 systemd[1]: Created slice kubepods-besteffort.slice.
Aug 13 00:54:36.713400 kubelet[1583]: I0813 00:54:36.713222 1583 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:54:36.713597 kubelet[1583]: I0813 00:54:36.713558 1583 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:36.713652 kubelet[1583]: I0813 00:54:36.713575 1583 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:36.714087 kubelet[1583]: I0813 00:54:36.714019 1583 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:36.715984 kubelet[1583]: E0813 00:54:36.715918 1583 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:54:36.716093 kubelet[1583]: E0813 00:54:36.716022 1583 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:54:36.782115 systemd[1]: Created slice kubepods-burstable-podede828ba74dc04ff050eba500199ec0b.slice. Aug 13 00:54:36.791592 kubelet[1583]: E0813 00:54:36.791518 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:36.794438 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Aug 13 00:54:36.796505 kubelet[1583]: E0813 00:54:36.796462 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:36.798811 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. 
Aug 13 00:54:36.801007 kubelet[1583]: E0813 00:54:36.800958 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:36.815735 kubelet[1583]: I0813 00:54:36.815673 1583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:36.816373 kubelet[1583]: E0813 00:54:36.816304 1583 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Aug 13 00:54:36.857746 kubelet[1583]: I0813 00:54:36.857668 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ede828ba74dc04ff050eba500199ec0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ede828ba74dc04ff050eba500199ec0b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:36.857746 kubelet[1583]: I0813 00:54:36.857723 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:36.857746 kubelet[1583]: I0813 00:54:36.857749 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:36.858101 kubelet[1583]: I0813 00:54:36.857807 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:36.858101 kubelet[1583]: I0813 00:54:36.857846 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:36.858101 kubelet[1583]: I0813 00:54:36.857872 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:36.858101 kubelet[1583]: I0813 00:54:36.857900 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:36.858101 kubelet[1583]: I0813 00:54:36.857920 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ede828ba74dc04ff050eba500199ec0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ede828ba74dc04ff050eba500199ec0b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:36.858338 kubelet[1583]: I0813 00:54:36.857959 1583 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ede828ba74dc04ff050eba500199ec0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ede828ba74dc04ff050eba500199ec0b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:37.018879 kubelet[1583]: I0813 00:54:37.018693 1583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:37.019864 kubelet[1583]: E0813 00:54:37.019799 1583 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Aug 13 00:54:37.059406 kubelet[1583]: E0813 00:54:37.059314 1583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Aug 13 00:54:37.092783 kubelet[1583]: E0813 00:54:37.092662 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:37.093837 env[1203]: time="2025-08-13T00:54:37.093744561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ede828ba74dc04ff050eba500199ec0b,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:37.097845 kubelet[1583]: E0813 00:54:37.097820 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:37.098415 env[1203]: time="2025-08-13T00:54:37.098345825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:37.101569 kubelet[1583]: E0813 00:54:37.101536 1583 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:37.102782 env[1203]: time="2025-08-13T00:54:37.102307076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:37.266928 kubelet[1583]: W0813 00:54:37.266841 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:54:37.267286 kubelet[1583]: E0813 00:54:37.266926 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:37.421810 kubelet[1583]: I0813 00:54:37.421774 1583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:37.422208 kubelet[1583]: E0813 00:54:37.422181 1583 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Aug 13 00:54:37.552424 kubelet[1583]: W0813 00:54:37.552329 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:54:37.552424 kubelet[1583]: E0813 00:54:37.552415 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:37.703873 kubelet[1583]: W0813 00:54:37.703668 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:54:37.703873 kubelet[1583]: E0813 00:54:37.703794 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:37.824371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592574279.mount: Deactivated successfully. 
Aug 13 00:54:37.831039 env[1203]: time="2025-08-13T00:54:37.830984997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.834839 env[1203]: time="2025-08-13T00:54:37.834807938Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.836097 env[1203]: time="2025-08-13T00:54:37.836076290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.837392 env[1203]: time="2025-08-13T00:54:37.837349422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.839034 env[1203]: time="2025-08-13T00:54:37.839002948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.840159 env[1203]: time="2025-08-13T00:54:37.840138321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.841261 env[1203]: time="2025-08-13T00:54:37.841228729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.842723 env[1203]: time="2025-08-13T00:54:37.842681528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.844186 env[1203]: time="2025-08-13T00:54:37.844160326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.845996 env[1203]: time="2025-08-13T00:54:37.845969415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.847465 env[1203]: time="2025-08-13T00:54:37.847437533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.849086 env[1203]: time="2025-08-13T00:54:37.849042146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:37.860690 kubelet[1583]: E0813 00:54:37.860654 1583 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Aug 13 00:54:37.882959 env[1203]: time="2025-08-13T00:54:37.882878687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:37.882959 env[1203]: time="2025-08-13T00:54:37.882918763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:37.883173 env[1203]: time="2025-08-13T00:54:37.882936786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:37.883173 env[1203]: time="2025-08-13T00:54:37.883056030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/462fbb560ed25163024b1654fd7b6989f01e154eeb2fd5484e1fd44e1913afae pid=1624 runtime=io.containerd.runc.v2 Aug 13 00:54:37.900996 env[1203]: time="2025-08-13T00:54:37.900846790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:37.900996 env[1203]: time="2025-08-13T00:54:37.900915148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:37.900996 env[1203]: time="2025-08-13T00:54:37.900940415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:37.901323 env[1203]: time="2025-08-13T00:54:37.901254134Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c2530a86671310f0bc2b9967e0de4709e58d3eb5be5d57e8c6406789dd4c6bd pid=1644 runtime=io.containerd.runc.v2 Aug 13 00:54:37.909502 systemd[1]: Started cri-containerd-462fbb560ed25163024b1654fd7b6989f01e154eeb2fd5484e1fd44e1913afae.scope. Aug 13 00:54:37.923165 env[1203]: time="2025-08-13T00:54:37.922929501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:37.923165 env[1203]: time="2025-08-13T00:54:37.922979925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:37.923165 env[1203]: time="2025-08-13T00:54:37.922993511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:37.923545 env[1203]: time="2025-08-13T00:54:37.923483341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d63818581b8f68cb6cc4c9f1003dc0cdd359a1834635ad59ae4c25869a975fc pid=1663 runtime=io.containerd.runc.v2 Aug 13 00:54:37.925875 systemd[1]: Started cri-containerd-5c2530a86671310f0bc2b9967e0de4709e58d3eb5be5d57e8c6406789dd4c6bd.scope. Aug 13 00:54:37.959770 systemd[1]: Started cri-containerd-4d63818581b8f68cb6cc4c9f1003dc0cdd359a1834635ad59ae4c25869a975fc.scope. Aug 13 00:54:37.988117 kubelet[1583]: W0813 00:54:37.988066 1583 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Aug 13 00:54:37.988117 kubelet[1583]: E0813 00:54:37.988116 1583 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:38.016702 env[1203]: time="2025-08-13T00:54:38.016631224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ede828ba74dc04ff050eba500199ec0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"462fbb560ed25163024b1654fd7b6989f01e154eeb2fd5484e1fd44e1913afae\"" Aug 13 00:54:38.017889 kubelet[1583]: E0813 00:54:38.017850 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.021057 env[1203]: time="2025-08-13T00:54:38.021020538Z" level=info msg="CreateContainer within sandbox \"462fbb560ed25163024b1654fd7b6989f01e154eeb2fd5484e1fd44e1913afae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:54:38.037013 env[1203]: time="2025-08-13T00:54:38.036969344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c2530a86671310f0bc2b9967e0de4709e58d3eb5be5d57e8c6406789dd4c6bd\"" Aug 13 00:54:38.038724 kubelet[1583]: E0813 00:54:38.038690 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.040347 env[1203]: time="2025-08-13T00:54:38.040319547Z" level=info msg="CreateContainer within sandbox \"5c2530a86671310f0bc2b9967e0de4709e58d3eb5be5d57e8c6406789dd4c6bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:54:38.042277 env[1203]: time="2025-08-13T00:54:38.042253790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d63818581b8f68cb6cc4c9f1003dc0cdd359a1834635ad59ae4c25869a975fc\"" Aug 13 00:54:38.042925 kubelet[1583]: E0813 00:54:38.042894 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.044461 env[1203]: time="2025-08-13T00:54:38.044364805Z" level=info msg="CreateContainer within sandbox \"4d63818581b8f68cb6cc4c9f1003dc0cdd359a1834635ad59ae4c25869a975fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:54:38.048256 env[1203]: 
time="2025-08-13T00:54:38.048201582Z" level=info msg="CreateContainer within sandbox \"462fbb560ed25163024b1654fd7b6989f01e154eeb2fd5484e1fd44e1913afae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6c68d389b2c232057395955232b5f753208a3ed2dac5b7f6066d34826295a9c\"" Aug 13 00:54:38.048895 env[1203]: time="2025-08-13T00:54:38.048861912Z" level=info msg="StartContainer for \"a6c68d389b2c232057395955232b5f753208a3ed2dac5b7f6066d34826295a9c\"" Aug 13 00:54:38.065882 env[1203]: time="2025-08-13T00:54:38.065810145Z" level=info msg="CreateContainer within sandbox \"4d63818581b8f68cb6cc4c9f1003dc0cdd359a1834635ad59ae4c25869a975fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57f1eb592c0d72457f601625761464c00f75b2e61a65b497462c50cd179ec8b7\"" Aug 13 00:54:38.065972 systemd[1]: Started cri-containerd-a6c68d389b2c232057395955232b5f753208a3ed2dac5b7f6066d34826295a9c.scope. Aug 13 00:54:38.066722 env[1203]: time="2025-08-13T00:54:38.066693354Z" level=info msg="StartContainer for \"57f1eb592c0d72457f601625761464c00f75b2e61a65b497462c50cd179ec8b7\"" Aug 13 00:54:38.068900 env[1203]: time="2025-08-13T00:54:38.068871746Z" level=info msg="CreateContainer within sandbox \"5c2530a86671310f0bc2b9967e0de4709e58d3eb5be5d57e8c6406789dd4c6bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bed954064d4db4bc6c782ca7958d2c0e77fc8880e73841e02715e8f91453fdb5\"" Aug 13 00:54:38.069430 env[1203]: time="2025-08-13T00:54:38.069407712Z" level=info msg="StartContainer for \"bed954064d4db4bc6c782ca7958d2c0e77fc8880e73841e02715e8f91453fdb5\"" Aug 13 00:54:38.087736 systemd[1]: Started cri-containerd-bed954064d4db4bc6c782ca7958d2c0e77fc8880e73841e02715e8f91453fdb5.scope. Aug 13 00:54:38.090992 systemd[1]: Started cri-containerd-57f1eb592c0d72457f601625761464c00f75b2e61a65b497462c50cd179ec8b7.scope. 
Aug 13 00:54:38.113452 env[1203]: time="2025-08-13T00:54:38.113400423Z" level=info msg="StartContainer for \"a6c68d389b2c232057395955232b5f753208a3ed2dac5b7f6066d34826295a9c\" returns successfully" Aug 13 00:54:38.154893 env[1203]: time="2025-08-13T00:54:38.154842793Z" level=info msg="StartContainer for \"bed954064d4db4bc6c782ca7958d2c0e77fc8880e73841e02715e8f91453fdb5\" returns successfully" Aug 13 00:54:38.155205 env[1203]: time="2025-08-13T00:54:38.154962349Z" level=info msg="StartContainer for \"57f1eb592c0d72457f601625761464c00f75b2e61a65b497462c50cd179ec8b7\" returns successfully" Aug 13 00:54:38.225129 kubelet[1583]: I0813 00:54:38.224997 1583 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:38.481725 kubelet[1583]: E0813 00:54:38.481577 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:38.481725 kubelet[1583]: E0813 00:54:38.481696 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.483393 kubelet[1583]: E0813 00:54:38.483362 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:38.483465 kubelet[1583]: E0813 00:54:38.483442 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:38.484956 kubelet[1583]: E0813 00:54:38.484933 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:38.485097 kubelet[1583]: E0813 00:54:38.485016 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:39.488195 kubelet[1583]: E0813 00:54:39.488150 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:39.488685 kubelet[1583]: E0813 00:54:39.488292 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:39.488948 kubelet[1583]: E0813 00:54:39.488904 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:39.489171 kubelet[1583]: E0813 00:54:39.489143 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:39.489835 kubelet[1583]: E0813 00:54:39.489749 1583 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:54:39.490221 kubelet[1583]: E0813 00:54:39.490046 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:39.989781 kubelet[1583]: E0813 00:54:39.989670 1583 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:54:40.039357 update_engine[1197]: I0813 00:54:40.039291 1197 update_attempter.cc:509] Updating boot flags... 
Aug 13 00:54:40.139210 kubelet[1583]: E0813 00:54:40.139022 1583 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185b2d78c0e99e69 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:36.446801513 +0000 UTC m=+0.362087304,LastTimestamp:2025-08-13 00:54:36.446801513 +0000 UTC m=+0.362087304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:54:40.139950 kubelet[1583]: I0813 00:54:40.139916 1583 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:54:40.140024 kubelet[1583]: E0813 00:54:40.139961 1583 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:54:40.156692 kubelet[1583]: I0813 00:54:40.156652 1583 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:40.197834 kubelet[1583]: E0813 00:54:40.194045 1583 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185b2d78c192b854 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:54:36.457883732 +0000 UTC m=+0.373169523,LastTimestamp:2025-08-13 00:54:36.457883732 +0000 UTC m=+0.373169523,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:54:40.212165 kubelet[1583]: E0813 00:54:40.212104 1583 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:40.212165 kubelet[1583]: I0813 00:54:40.212159 1583 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:40.214367 kubelet[1583]: E0813 00:54:40.214322 1583 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:40.214367 kubelet[1583]: I0813 00:54:40.214346 1583 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:40.215605 kubelet[1583]: E0813 00:54:40.215573 1583 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:40.435183 kubelet[1583]: I0813 00:54:40.435150 1583 apiserver.go:52] "Watching apiserver" Aug 13 00:54:40.455954 kubelet[1583]: I0813 00:54:40.455906 1583 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:54:40.488552 kubelet[1583]: I0813 00:54:40.488501 1583 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:40.489048 kubelet[1583]: I0813 00:54:40.488516 1583 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:40.491169 kubelet[1583]: E0813 00:54:40.491139 1583 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:40.491319 kubelet[1583]: E0813 00:54:40.491300 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:40.491684 kubelet[1583]: E0813 00:54:40.491655 1583 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:40.491895 kubelet[1583]: E0813 00:54:40.491750 1583 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:42.939244 systemd[1]: Reloading. Aug 13 00:54:43.032222 /usr/lib/systemd/system-generators/torcx-generator[1891]: time="2025-08-13T00:54:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:43.032631 /usr/lib/systemd/system-generators/torcx-generator[1891]: time="2025-08-13T00:54:43Z" level=info msg="torcx already run" Aug 13 00:54:43.098646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:43.098663 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Aug 13 00:54:43.116265 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:43.210063 systemd[1]: Stopping kubelet.service... Aug 13 00:54:43.233422 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:43.233681 systemd[1]: Stopped kubelet.service. Aug 13 00:54:43.235921 systemd[1]: Starting kubelet.service... Aug 13 00:54:43.344204 systemd[1]: Started kubelet.service. Aug 13 00:54:43.409039 kubelet[1936]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:43.409542 kubelet[1936]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:54:43.409542 kubelet[1936]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:54:43.409723 kubelet[1936]: I0813 00:54:43.409589 1936 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:43.416974 kubelet[1936]: I0813 00:54:43.416916 1936 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:54:43.416974 kubelet[1936]: I0813 00:54:43.416963 1936 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:43.417358 kubelet[1936]: I0813 00:54:43.417326 1936 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:54:43.419085 kubelet[1936]: I0813 00:54:43.419052 1936 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:54:43.421743 kubelet[1936]: I0813 00:54:43.421691 1936 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:43.425275 kubelet[1936]: E0813 00:54:43.425243 1936 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:43.425275 kubelet[1936]: I0813 00:54:43.425272 1936 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:54:43.430401 kubelet[1936]: I0813 00:54:43.430360 1936 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:54:43.430742 kubelet[1936]: I0813 00:54:43.430618 1936 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:43.431022 kubelet[1936]: I0813 00:54:43.430676 1936 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:54:43.431146 kubelet[1936]: I0813 00:54:43.431042 1936 topology_manager.go:138] "Creating topology manager with none policy" 
Aug 13 00:54:43.431146 kubelet[1936]: I0813 00:54:43.431060 1936 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:54:43.431146 kubelet[1936]: I0813 00:54:43.431116 1936 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:43.431297 kubelet[1936]: I0813 00:54:43.431271 1936 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:54:43.431349 kubelet[1936]: I0813 00:54:43.431310 1936 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:43.431349 kubelet[1936]: I0813 00:54:43.431334 1936 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:54:43.431349 kubelet[1936]: I0813 00:54:43.431347 1936 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:43.433214 kubelet[1936]: I0813 00:54:43.433187 1936 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:54:43.433722 kubelet[1936]: I0813 00:54:43.433707 1936 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:54:43.434520 kubelet[1936]: I0813 00:54:43.434505 1936 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:54:43.434661 kubelet[1936]: I0813 00:54:43.434646 1936 server.go:1287] "Started kubelet" Aug 13 00:54:43.434955 kubelet[1936]: I0813 00:54:43.434906 1936 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:43.435293 kubelet[1936]: I0813 00:54:43.435243 1936 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:43.435587 kubelet[1936]: I0813 00:54:43.435570 1936 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:43.436039 kubelet[1936]: I0813 00:54:43.436008 1936 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:54:43.438521 kubelet[1936]: I0813 00:54:43.438493 1936 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:43.438909 kubelet[1936]: I0813 00:54:43.438896 1936 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:43.439253 kubelet[1936]: E0813 00:54:43.439215 1936 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:54:43.439308 kubelet[1936]: E0813 00:54:43.439298 1936 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:54:43.439338 kubelet[1936]: I0813 00:54:43.439324 1936 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:54:43.439616 kubelet[1936]: I0813 00:54:43.439569 1936 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:54:43.439745 kubelet[1936]: I0813 00:54:43.439716 1936 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:43.440971 kubelet[1936]: I0813 00:54:43.440931 1936 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:54:43.441178 kubelet[1936]: I0813 00:54:43.441052 1936 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:43.445022 kubelet[1936]: I0813 00:54:43.443392 1936 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:54:43.467789 kubelet[1936]: I0813 00:54:43.467624 1936 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:43.468742 kubelet[1936]: I0813 00:54:43.468692 1936 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:54:43.468742 kubelet[1936]: I0813 00:54:43.468725 1936 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:54:43.469029 kubelet[1936]: I0813 00:54:43.468749 1936 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:54:43.469029 kubelet[1936]: I0813 00:54:43.468773 1936 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:54:43.469029 kubelet[1936]: E0813 00:54:43.468831 1936 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:43.485482 kubelet[1936]: I0813 00:54:43.485448 1936 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:54:43.485482 kubelet[1936]: I0813 00:54:43.485468 1936 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:43.485482 kubelet[1936]: I0813 00:54:43.485488 1936 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:43.485674 kubelet[1936]: I0813 00:54:43.485656 1936 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:54:43.485699 kubelet[1936]: I0813 00:54:43.485667 1936 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:54:43.485699 kubelet[1936]: I0813 00:54:43.485687 1936 policy_none.go:49] "None policy: Start" Aug 13 00:54:43.485699 kubelet[1936]: I0813 00:54:43.485698 1936 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:54:43.485801 kubelet[1936]: I0813 00:54:43.485710 1936 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:54:43.485861 kubelet[1936]: I0813 00:54:43.485846 1936 state_mem.go:75] "Updated machine memory state" Aug 13 00:54:43.489259 kubelet[1936]: I0813 00:54:43.489229 1936 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:54:43.489451 kubelet[1936]: I0813 
00:54:43.489428 1936 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:43.489518 kubelet[1936]: I0813 00:54:43.489451 1936 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:43.489908 kubelet[1936]: I0813 00:54:43.489693 1936 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:43.490368 kubelet[1936]: E0813 00:54:43.490329 1936 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:54:43.570292 kubelet[1936]: I0813 00:54:43.570249 1936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:43.570515 kubelet[1936]: I0813 00:54:43.570357 1936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:43.570515 kubelet[1936]: I0813 00:54:43.570364 1936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:43.592999 kubelet[1936]: I0813 00:54:43.592952 1936 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:54:43.603714 kubelet[1936]: I0813 00:54:43.603677 1936 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 13 00:54:43.603928 kubelet[1936]: I0813 00:54:43.603797 1936 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:54:43.640600 kubelet[1936]: I0813 00:54:43.640532 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ede828ba74dc04ff050eba500199ec0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ede828ba74dc04ff050eba500199ec0b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:43.640600 kubelet[1936]: I0813 00:54:43.640584 
1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:43.640848 kubelet[1936]: I0813 00:54:43.640634 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:43.640848 kubelet[1936]: I0813 00:54:43.640675 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:43.640848 kubelet[1936]: I0813 00:54:43.640792 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:43.640917 kubelet[1936]: I0813 00:54:43.640846 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ede828ba74dc04ff050eba500199ec0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ede828ba74dc04ff050eba500199ec0b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:43.640917 kubelet[1936]: I0813 00:54:43.640876 1936 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ede828ba74dc04ff050eba500199ec0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ede828ba74dc04ff050eba500199ec0b\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:43.640917 kubelet[1936]: I0813 00:54:43.640897 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:43.640917 kubelet[1936]: I0813 00:54:43.640913 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:54:43.876555 kubelet[1936]: E0813 00:54:43.876397 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:43.877028 kubelet[1936]: E0813 00:54:43.877004 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:43.877293 kubelet[1936]: E0813 00:54:43.877266 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:43.931731 sudo[1973]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:54:43.932003 sudo[1973]: 
pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:54:44.433104 kubelet[1936]: I0813 00:54:44.433047 1936 apiserver.go:52] "Watching apiserver" Aug 13 00:54:44.439909 kubelet[1936]: I0813 00:54:44.439871 1936 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:54:44.442575 sudo[1973]: pam_unix(sudo:session): session closed for user root Aug 13 00:54:44.480141 kubelet[1936]: E0813 00:54:44.480097 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:44.480411 kubelet[1936]: I0813 00:54:44.480137 1936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:44.482500 kubelet[1936]: I0813 00:54:44.480266 1936 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:44.488476 kubelet[1936]: E0813 00:54:44.488374 1936 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:54:44.488716 kubelet[1936]: E0813 00:54:44.488548 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:44.489736 kubelet[1936]: E0813 00:54:44.489699 1936 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:54:44.489854 kubelet[1936]: E0813 00:54:44.489835 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:44.502822 kubelet[1936]: I0813 00:54:44.502717 1936 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.502671764 podStartE2EDuration="1.502671764s" podCreationTimestamp="2025-08-13 00:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:44.502394974 +0000 UTC m=+1.142901279" watchObservedRunningTime="2025-08-13 00:54:44.502671764 +0000 UTC m=+1.143178069" Aug 13 00:54:44.524789 kubelet[1936]: I0813 00:54:44.524667 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5246402780000001 podStartE2EDuration="1.524640278s" podCreationTimestamp="2025-08-13 00:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:44.513785659 +0000 UTC m=+1.154291965" watchObservedRunningTime="2025-08-13 00:54:44.524640278 +0000 UTC m=+1.165146583" Aug 13 00:54:45.481512 kubelet[1936]: E0813 00:54:45.481459 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:45.481512 kubelet[1936]: E0813 00:54:45.481481 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:46.569204 sudo[1311]: pam_unix(sudo:session): session closed for user root Aug 13 00:54:46.570442 sshd[1308]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:46.572505 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:43464.service: Deactivated successfully. Aug 13 00:54:46.573331 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:54:46.573481 systemd[1]: session-7.scope: Consumed 5.009s CPU time. Aug 13 00:54:46.573990 systemd-logind[1189]: Session 7 logged out. 
Waiting for processes to exit. Aug 13 00:54:46.574731 systemd-logind[1189]: Removed session 7. Aug 13 00:54:47.604197 kubelet[1936]: I0813 00:54:47.604146 1936 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:54:47.604728 env[1203]: time="2025-08-13T00:54:47.604502498Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:54:47.605030 kubelet[1936]: I0813 00:54:47.604722 1936 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:54:48.182677 kubelet[1936]: E0813 00:54:48.182593 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.399026 kubelet[1936]: I0813 00:54:48.398965 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.398947157 podStartE2EDuration="5.398947157s" podCreationTimestamp="2025-08-13 00:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:44.525343609 +0000 UTC m=+1.165849904" watchObservedRunningTime="2025-08-13 00:54:48.398947157 +0000 UTC m=+5.039453462" Aug 13 00:54:48.413457 systemd[1]: Created slice kubepods-besteffort-pod31811eed_bd40_4b82_a087_9dfdb8fb1959.slice. Aug 13 00:54:48.427087 systemd[1]: Created slice kubepods-burstable-pod85e7325e_7501_4104_81f7_1173751973ec.slice. Aug 13 00:54:48.450081 systemd[1]: Created slice kubepods-besteffort-podbfa68123_8c71_4d6b_a45c_875b89f6bf9d.slice. 
Aug 13 00:54:48.488128 kubelet[1936]: E0813 00:54:48.488091 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.572638 kubelet[1936]: I0813 00:54:48.572571 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-kernel\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.572638 kubelet[1936]: I0813 00:54:48.572623 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31811eed-bd40-4b82-a087-9dfdb8fb1959-xtables-lock\") pod \"kube-proxy-rnz6s\" (UID: \"31811eed-bd40-4b82-a087-9dfdb8fb1959\") " pod="kube-system/kube-proxy-rnz6s" Aug 13 00:54:48.572638 kubelet[1936]: I0813 00:54:48.572646 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31811eed-bd40-4b82-a087-9dfdb8fb1959-lib-modules\") pod \"kube-proxy-rnz6s\" (UID: \"31811eed-bd40-4b82-a087-9dfdb8fb1959\") " pod="kube-system/kube-proxy-rnz6s" Aug 13 00:54:48.572925 kubelet[1936]: I0813 00:54:48.572680 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-bpf-maps\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.572925 kubelet[1936]: I0813 00:54:48.572710 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dgq7n\" (UID: \"bfa68123-8c71-4d6b-a45c-875b89f6bf9d\") " pod="kube-system/cilium-operator-6c4d7847fc-dgq7n" Aug 13 00:54:48.572925 kubelet[1936]: I0813 00:54:48.572730 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-run\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.572925 kubelet[1936]: I0813 00:54:48.572797 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-lib-modules\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.572925 kubelet[1936]: I0813 00:54:48.572837 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85e7325e-7501-4104-81f7-1173751973ec-clustermesh-secrets\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573044 kubelet[1936]: I0813 00:54:48.572857 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31811eed-bd40-4b82-a087-9dfdb8fb1959-kube-proxy\") pod \"kube-proxy-rnz6s\" (UID: \"31811eed-bd40-4b82-a087-9dfdb8fb1959\") " pod="kube-system/kube-proxy-rnz6s" Aug 13 00:54:48.573044 kubelet[1936]: I0813 00:54:48.572877 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf92g\" (UniqueName: 
\"kubernetes.io/projected/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-kube-api-access-wf92g\") pod \"cilium-operator-6c4d7847fc-dgq7n\" (UID: \"bfa68123-8c71-4d6b-a45c-875b89f6bf9d\") " pod="kube-system/cilium-operator-6c4d7847fc-dgq7n" Aug 13 00:54:48.573044 kubelet[1936]: I0813 00:54:48.572903 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-cgroup\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573044 kubelet[1936]: I0813 00:54:48.572932 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-etc-cni-netd\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573044 kubelet[1936]: I0813 00:54:48.572952 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdkqj\" (UniqueName: \"kubernetes.io/projected/31811eed-bd40-4b82-a087-9dfdb8fb1959-kube-api-access-hdkqj\") pod \"kube-proxy-rnz6s\" (UID: \"31811eed-bd40-4b82-a087-9dfdb8fb1959\") " pod="kube-system/kube-proxy-rnz6s" Aug 13 00:54:48.573268 kubelet[1936]: I0813 00:54:48.572967 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-xtables-lock\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573268 kubelet[1936]: I0813 00:54:48.572987 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/85e7325e-7501-4104-81f7-1173751973ec-cilium-config-path\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573268 kubelet[1936]: I0813 00:54:48.573007 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-hubble-tls\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573268 kubelet[1936]: I0813 00:54:48.573023 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-hostproc\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573268 kubelet[1936]: I0813 00:54:48.573046 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cni-path\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573268 kubelet[1936]: I0813 00:54:48.573078 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-net\") pod \"cilium-8lwbn\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.573433 kubelet[1936]: I0813 00:54:48.573101 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm4z6\" (UniqueName: \"kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-kube-api-access-pm4z6\") pod \"cilium-8lwbn\" (UID: 
\"85e7325e-7501-4104-81f7-1173751973ec\") " pod="kube-system/cilium-8lwbn" Aug 13 00:54:48.675026 kubelet[1936]: I0813 00:54:48.674985 1936 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:54:48.724280 kubelet[1936]: E0813 00:54:48.723663 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.724433 env[1203]: time="2025-08-13T00:54:48.724392921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rnz6s,Uid:31811eed-bd40-4b82-a087-9dfdb8fb1959,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:48.731026 kubelet[1936]: E0813 00:54:48.730991 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.732175 env[1203]: time="2025-08-13T00:54:48.731542842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8lwbn,Uid:85e7325e-7501-4104-81f7-1173751973ec,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:48.749117 env[1203]: time="2025-08-13T00:54:48.749021680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:48.749117 env[1203]: time="2025-08-13T00:54:48.749091511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:48.749117 env[1203]: time="2025-08-13T00:54:48.749108112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:48.749549 env[1203]: time="2025-08-13T00:54:48.749446788Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2 pid=2048 runtime=io.containerd.runc.v2 Aug 13 00:54:48.749549 env[1203]: time="2025-08-13T00:54:48.749411462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:48.749549 env[1203]: time="2025-08-13T00:54:48.749446197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:48.749549 env[1203]: time="2025-08-13T00:54:48.749455685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:48.749690 env[1203]: time="2025-08-13T00:54:48.749577664Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71a59367fac0cd50a25f4a0ebca5f166fd47181c8aad8d7b15916ca17175282f pid=2034 runtime=io.containerd.runc.v2 Aug 13 00:54:48.755097 kubelet[1936]: E0813 00:54:48.755061 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.756886 env[1203]: time="2025-08-13T00:54:48.756673203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dgq7n,Uid:bfa68123-8c71-4d6b-a45c-875b89f6bf9d,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:48.762613 systemd[1]: Started cri-containerd-71a59367fac0cd50a25f4a0ebca5f166fd47181c8aad8d7b15916ca17175282f.scope. Aug 13 00:54:48.767370 systemd[1]: Started cri-containerd-fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2.scope. 
Aug 13 00:54:48.780029 env[1203]: time="2025-08-13T00:54:48.779841921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:48.780029 env[1203]: time="2025-08-13T00:54:48.780003213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:48.780029 env[1203]: time="2025-08-13T00:54:48.780019475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:48.780286 env[1203]: time="2025-08-13T00:54:48.780228637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016 pid=2099 runtime=io.containerd.runc.v2 Aug 13 00:54:48.792229 systemd[1]: Started cri-containerd-99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016.scope. 
Aug 13 00:54:48.794226 env[1203]: time="2025-08-13T00:54:48.794187097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8lwbn,Uid:85e7325e-7501-4104-81f7-1173751973ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\"" Aug 13 00:54:48.794915 kubelet[1936]: E0813 00:54:48.794890 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.799420 env[1203]: time="2025-08-13T00:54:48.799356441Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:54:48.805201 env[1203]: time="2025-08-13T00:54:48.805156899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rnz6s,Uid:31811eed-bd40-4b82-a087-9dfdb8fb1959,Namespace:kube-system,Attempt:0,} returns sandbox id \"71a59367fac0cd50a25f4a0ebca5f166fd47181c8aad8d7b15916ca17175282f\"" Aug 13 00:54:48.805959 kubelet[1936]: E0813 00:54:48.805921 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.809320 env[1203]: time="2025-08-13T00:54:48.809274948Z" level=info msg="CreateContainer within sandbox \"71a59367fac0cd50a25f4a0ebca5f166fd47181c8aad8d7b15916ca17175282f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:54:48.835846 env[1203]: time="2025-08-13T00:54:48.835788495Z" level=info msg="CreateContainer within sandbox \"71a59367fac0cd50a25f4a0ebca5f166fd47181c8aad8d7b15916ca17175282f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d695de246654e4614b89509b01e58a7bd49209c4f13b25b1027f6fdc643220c1\"" Aug 13 00:54:48.836557 env[1203]: time="2025-08-13T00:54:48.836521511Z" level=info msg="StartContainer for 
\"d695de246654e4614b89509b01e58a7bd49209c4f13b25b1027f6fdc643220c1\"" Aug 13 00:54:48.843356 env[1203]: time="2025-08-13T00:54:48.842309986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dgq7n,Uid:bfa68123-8c71-4d6b-a45c-875b89f6bf9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016\"" Aug 13 00:54:48.843540 kubelet[1936]: E0813 00:54:48.843207 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:48.857229 systemd[1]: Started cri-containerd-d695de246654e4614b89509b01e58a7bd49209c4f13b25b1027f6fdc643220c1.scope. Aug 13 00:54:48.886602 env[1203]: time="2025-08-13T00:54:48.886546490Z" level=info msg="StartContainer for \"d695de246654e4614b89509b01e58a7bd49209c4f13b25b1027f6fdc643220c1\" returns successfully" Aug 13 00:54:49.491606 kubelet[1936]: E0813 00:54:49.491572 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:49.501203 kubelet[1936]: I0813 00:54:49.501111 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rnz6s" podStartSLOduration=1.501086071 podStartE2EDuration="1.501086071s" podCreationTimestamp="2025-08-13 00:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:54:49.500932984 +0000 UTC m=+6.141439309" watchObservedRunningTime="2025-08-13 00:54:49.501086071 +0000 UTC m=+6.141592366" Aug 13 00:54:50.553136 kubelet[1936]: E0813 00:54:50.552131 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 
00:54:51.494447 kubelet[1936]: E0813 00:54:51.494394 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:52.627421 kubelet[1936]: E0813 00:54:52.627384 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:53.497992 kubelet[1936]: E0813 00:54:53.497941 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:54.499370 kubelet[1936]: E0813 00:54:54.499331 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:54:58.045770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614389414.mount: Deactivated successfully. 
Aug 13 00:55:05.431775 env[1203]: time="2025-08-13T00:55:05.431664761Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:05.707252 env[1203]: time="2025-08-13T00:55:05.707087338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:05.825608 env[1203]: time="2025-08-13T00:55:05.825517608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:05.826504 env[1203]: time="2025-08-13T00:55:05.826454906Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:55:05.831369 env[1203]: time="2025-08-13T00:55:05.831316348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:55:05.832349 env[1203]: time="2025-08-13T00:55:05.832314811Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:55:10.536359 env[1203]: time="2025-08-13T00:55:10.536283240Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\"" Aug 13 
00:55:10.536981 env[1203]: time="2025-08-13T00:55:10.536939441Z" level=info msg="StartContainer for \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\"" Aug 13 00:55:10.560977 systemd[1]: Started cri-containerd-92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1.scope. Aug 13 00:55:10.595416 systemd[1]: cri-containerd-92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1.scope: Deactivated successfully. Aug 13 00:55:10.935791 env[1203]: time="2025-08-13T00:55:10.935728429Z" level=info msg="StartContainer for \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\" returns successfully" Aug 13 00:55:11.149596 env[1203]: time="2025-08-13T00:55:11.149514979Z" level=info msg="shim disconnected" id=92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1 Aug 13 00:55:11.149596 env[1203]: time="2025-08-13T00:55:11.149593166Z" level=warning msg="cleaning up after shim disconnected" id=92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1 namespace=k8s.io Aug 13 00:55:11.149888 env[1203]: time="2025-08-13T00:55:11.149608945Z" level=info msg="cleaning up dead shim" Aug 13 00:55:11.157380 env[1203]: time="2025-08-13T00:55:11.157335900Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\n" Aug 13 00:55:11.326726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1-rootfs.mount: Deactivated successfully. 
Aug 13 00:55:11.530003 kubelet[1936]: E0813 00:55:11.529963 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:11.533271 env[1203]: time="2025-08-13T00:55:11.532120137Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:55:11.536797 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:55666.service. Aug 13 00:55:11.600832 sshd[2383]: Accepted publickey for core from 10.0.0.1 port 55666 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:11.602210 sshd[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:11.605999 systemd-logind[1189]: New session 8 of user core. Aug 13 00:55:11.606851 systemd[1]: Started session-8.scope. Aug 13 00:55:11.743078 sshd[2383]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:11.745330 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:55666.service: Deactivated successfully. Aug 13 00:55:11.746026 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:55:11.746519 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:55:11.747191 systemd-logind[1189]: Removed session 8. 
Aug 13 00:55:11.755511 env[1203]: time="2025-08-13T00:55:11.755429070Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\"" Aug 13 00:55:11.756268 env[1203]: time="2025-08-13T00:55:11.756197000Z" level=info msg="StartContainer for \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\"" Aug 13 00:55:11.774332 systemd[1]: Started cri-containerd-d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196.scope. Aug 13 00:55:11.813271 env[1203]: time="2025-08-13T00:55:11.813201369Z" level=info msg="StartContainer for \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\" returns successfully" Aug 13 00:55:11.821483 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:55:11.821737 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:55:11.821956 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:55:11.823693 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:55:11.831008 systemd[1]: cri-containerd-d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196.scope: Deactivated successfully. Aug 13 00:55:11.831816 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 00:55:11.856675 env[1203]: time="2025-08-13T00:55:11.856508683Z" level=info msg="shim disconnected" id=d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196 Aug 13 00:55:11.856675 env[1203]: time="2025-08-13T00:55:11.856563796Z" level=warning msg="cleaning up after shim disconnected" id=d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196 namespace=k8s.io Aug 13 00:55:11.856675 env[1203]: time="2025-08-13T00:55:11.856573244Z" level=info msg="cleaning up dead shim" Aug 13 00:55:11.863096 env[1203]: time="2025-08-13T00:55:11.863058229Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2451 runtime=io.containerd.runc.v2\n" Aug 13 00:55:12.326432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196-rootfs.mount: Deactivated successfully. Aug 13 00:55:12.533884 kubelet[1936]: E0813 00:55:12.533245 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:12.538104 env[1203]: time="2025-08-13T00:55:12.538055989Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:55:13.105748 env[1203]: time="2025-08-13T00:55:13.105685104Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\"" Aug 13 00:55:13.106599 env[1203]: time="2025-08-13T00:55:13.106559635Z" level=info msg="StartContainer for \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\"" Aug 13 00:55:13.125332 systemd[1]: Started 
cri-containerd-cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9.scope. Aug 13 00:55:13.256538 systemd[1]: cri-containerd-cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9.scope: Deactivated successfully. Aug 13 00:55:13.541409 env[1203]: time="2025-08-13T00:55:13.540934225Z" level=info msg="StartContainer for \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\" returns successfully" Aug 13 00:55:13.558672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9-rootfs.mount: Deactivated successfully. Aug 13 00:55:14.245126 env[1203]: time="2025-08-13T00:55:14.245064817Z" level=info msg="shim disconnected" id=cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9 Aug 13 00:55:14.245126 env[1203]: time="2025-08-13T00:55:14.245119609Z" level=warning msg="cleaning up after shim disconnected" id=cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9 namespace=k8s.io Aug 13 00:55:14.245126 env[1203]: time="2025-08-13T00:55:14.245130340Z" level=info msg="cleaning up dead shim" Aug 13 00:55:14.251917 env[1203]: time="2025-08-13T00:55:14.251859513Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2508 runtime=io.containerd.runc.v2\n" Aug 13 00:55:14.547265 kubelet[1936]: E0813 00:55:14.547133 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:14.548861 env[1203]: time="2025-08-13T00:55:14.548820328Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:55:14.965013 env[1203]: time="2025-08-13T00:55:14.963871260Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:14.968541 env[1203]: time="2025-08-13T00:55:14.968501727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:14.968928 env[1203]: time="2025-08-13T00:55:14.968877202Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\"" Aug 13 00:55:14.969644 env[1203]: time="2025-08-13T00:55:14.969606640Z" level=info msg="StartContainer for \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\"" Aug 13 00:55:14.970526 env[1203]: time="2025-08-13T00:55:14.970490789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:14.970864 env[1203]: time="2025-08-13T00:55:14.970836387Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:55:14.973719 env[1203]: time="2025-08-13T00:55:14.973662078Z" level=info msg="CreateContainer within sandbox \"99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:55:14.988185 env[1203]: time="2025-08-13T00:55:14.988121573Z" level=info msg="CreateContainer within sandbox 
\"99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\"" Aug 13 00:55:14.989223 env[1203]: time="2025-08-13T00:55:14.989164980Z" level=info msg="StartContainer for \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\"" Aug 13 00:55:14.993786 systemd[1]: Started cri-containerd-7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7.scope. Aug 13 00:55:15.012329 systemd[1]: Started cri-containerd-a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347.scope. Aug 13 00:55:15.030746 systemd[1]: cri-containerd-7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7.scope: Deactivated successfully. Aug 13 00:55:15.302499 env[1203]: time="2025-08-13T00:55:15.302378098Z" level=info msg="StartContainer for \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\" returns successfully" Aug 13 00:55:15.355479 env[1203]: time="2025-08-13T00:55:15.355420230Z" level=info msg="StartContainer for \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\" returns successfully" Aug 13 00:55:15.387119 env[1203]: time="2025-08-13T00:55:15.387043472Z" level=info msg="shim disconnected" id=7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7 Aug 13 00:55:15.387119 env[1203]: time="2025-08-13T00:55:15.387112171Z" level=warning msg="cleaning up after shim disconnected" id=7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7 namespace=k8s.io Aug 13 00:55:15.387119 env[1203]: time="2025-08-13T00:55:15.387127009Z" level=info msg="cleaning up dead shim" Aug 13 00:55:15.404536 env[1203]: time="2025-08-13T00:55:15.404459624Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2602 runtime=io.containerd.runc.v2\n" Aug 13 00:55:15.556204 kubelet[1936]: E0813 00:55:15.556057 1936 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:15.556204 kubelet[1936]: E0813 00:55:15.556105 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:15.558996 env[1203]: time="2025-08-13T00:55:15.558936357Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:55:15.579060 env[1203]: time="2025-08-13T00:55:15.579006748Z" level=info msg="CreateContainer within sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\"" Aug 13 00:55:15.579815 env[1203]: time="2025-08-13T00:55:15.579788654Z" level=info msg="StartContainer for \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\"" Aug 13 00:55:15.587384 kubelet[1936]: I0813 00:55:15.587321 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dgq7n" podStartSLOduration=1.459409929 podStartE2EDuration="27.587293252s" podCreationTimestamp="2025-08-13 00:54:48 +0000 UTC" firstStartedPulling="2025-08-13 00:54:48.84412345 +0000 UTC m=+5.484629775" lastFinishedPulling="2025-08-13 00:55:14.972006792 +0000 UTC m=+31.612513098" observedRunningTime="2025-08-13 00:55:15.572522985 +0000 UTC m=+32.213029310" watchObservedRunningTime="2025-08-13 00:55:15.587293252 +0000 UTC m=+32.227799557" Aug 13 00:55:15.595624 systemd[1]: Started cri-containerd-4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605.scope. 
Aug 13 00:55:15.964042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7-rootfs.mount: Deactivated successfully. Aug 13 00:55:16.038603 env[1203]: time="2025-08-13T00:55:16.038207585Z" level=info msg="StartContainer for \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\" returns successfully" Aug 13 00:55:16.054099 systemd[1]: run-containerd-runc-k8s.io-4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605-runc.8ZJFyo.mount: Deactivated successfully. Aug 13 00:55:16.154072 kubelet[1936]: I0813 00:55:16.154031 1936 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:55:16.241923 systemd[1]: Created slice kubepods-burstable-pod1de871da_bd4c_4b27_a97d_a16c629da180.slice. Aug 13 00:55:16.248645 systemd[1]: Created slice kubepods-burstable-podcc503a57_59e3_4aca_a790_ed73b8f7ffcf.slice. Aug 13 00:55:16.352397 kubelet[1936]: I0813 00:55:16.352338 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x94v\" (UniqueName: \"kubernetes.io/projected/1de871da-bd4c-4b27-a97d-a16c629da180-kube-api-access-4x94v\") pod \"coredns-668d6bf9bc-qnrkk\" (UID: \"1de871da-bd4c-4b27-a97d-a16c629da180\") " pod="kube-system/coredns-668d6bf9bc-qnrkk" Aug 13 00:55:16.352397 kubelet[1936]: I0813 00:55:16.352391 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1de871da-bd4c-4b27-a97d-a16c629da180-config-volume\") pod \"coredns-668d6bf9bc-qnrkk\" (UID: \"1de871da-bd4c-4b27-a97d-a16c629da180\") " pod="kube-system/coredns-668d6bf9bc-qnrkk" Aug 13 00:55:16.352397 kubelet[1936]: I0813 00:55:16.352413 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/cc503a57-59e3-4aca-a790-ed73b8f7ffcf-config-volume\") pod \"coredns-668d6bf9bc-7lkrw\" (UID: \"cc503a57-59e3-4aca-a790-ed73b8f7ffcf\") " pod="kube-system/coredns-668d6bf9bc-7lkrw" Aug 13 00:55:16.352674 kubelet[1936]: I0813 00:55:16.352428 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsz2\" (UniqueName: \"kubernetes.io/projected/cc503a57-59e3-4aca-a790-ed73b8f7ffcf-kube-api-access-jxsz2\") pod \"coredns-668d6bf9bc-7lkrw\" (UID: \"cc503a57-59e3-4aca-a790-ed73b8f7ffcf\") " pod="kube-system/coredns-668d6bf9bc-7lkrw" Aug 13 00:55:16.544456 kubelet[1936]: E0813 00:55:16.544295 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:16.549210 env[1203]: time="2025-08-13T00:55:16.549148073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnrkk,Uid:1de871da-bd4c-4b27-a97d-a16c629da180,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:16.552403 kubelet[1936]: E0813 00:55:16.552354 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:16.552970 env[1203]: time="2025-08-13T00:55:16.552912444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lkrw,Uid:cc503a57-59e3-4aca-a790-ed73b8f7ffcf,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:16.562787 kubelet[1936]: E0813 00:55:16.562723 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:16.563180 kubelet[1936]: E0813 00:55:16.562966 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:16.578167 kubelet[1936]: I0813 00:55:16.578106 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8lwbn" podStartSLOduration=11.545589452 podStartE2EDuration="28.578083102s" podCreationTimestamp="2025-08-13 00:54:48 +0000 UTC" firstStartedPulling="2025-08-13 00:54:48.798528767 +0000 UTC m=+5.439035072" lastFinishedPulling="2025-08-13 00:55:05.831022426 +0000 UTC m=+22.471528722" observedRunningTime="2025-08-13 00:55:16.577689764 +0000 UTC m=+33.218196059" watchObservedRunningTime="2025-08-13 00:55:16.578083102 +0000 UTC m=+33.218589407" Aug 13 00:55:16.747297 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:55674.service. Aug 13 00:55:16.790641 sshd[2788]: Accepted publickey for core from 10.0.0.1 port 55674 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:16.792039 sshd[2788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:16.795461 systemd-logind[1189]: New session 9 of user core. Aug 13 00:55:16.796304 systemd[1]: Started session-9.scope. Aug 13 00:55:16.915391 sshd[2788]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:16.917881 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:55674.service: Deactivated successfully. Aug 13 00:55:16.918541 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:55:16.919009 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:55:16.919731 systemd-logind[1189]: Removed session 9. 
Aug 13 00:55:17.564811 kubelet[1936]: E0813 00:55:17.564748 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:17.878109 systemd-networkd[1020]: cilium_host: Link UP Aug 13 00:55:17.878222 systemd-networkd[1020]: cilium_net: Link UP Aug 13 00:55:17.880446 systemd-networkd[1020]: cilium_net: Gained carrier Aug 13 00:55:17.881528 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 00:55:17.881656 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:55:17.881839 systemd-networkd[1020]: cilium_host: Gained carrier Aug 13 00:55:17.882032 systemd-networkd[1020]: cilium_net: Gained IPv6LL Aug 13 00:55:17.882167 systemd-networkd[1020]: cilium_host: Gained IPv6LL Aug 13 00:55:17.960423 systemd-networkd[1020]: cilium_vxlan: Link UP Aug 13 00:55:17.960432 systemd-networkd[1020]: cilium_vxlan: Gained carrier Aug 13 00:55:18.159799 kernel: NET: Registered PF_ALG protocol family Aug 13 00:55:18.566833 kubelet[1936]: E0813 00:55:18.566715 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:18.735746 systemd-networkd[1020]: lxc_health: Link UP Aug 13 00:55:18.755974 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:55:18.755683 systemd-networkd[1020]: lxc_health: Gained carrier Aug 13 00:55:19.094212 systemd-networkd[1020]: lxcb13808c4f9cc: Link UP Aug 13 00:55:19.117787 kernel: eth0: renamed from tmp3364f Aug 13 00:55:19.148959 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:55:19.149080 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb13808c4f9cc: link becomes ready Aug 13 00:55:19.149160 systemd-networkd[1020]: lxcb13808c4f9cc: Gained carrier Aug 13 00:55:19.149319 systemd-networkd[1020]: lxced33ed9b1e24: 
Link UP Aug 13 00:55:19.158894 kernel: eth0: renamed from tmp19a91 Aug 13 00:55:19.163955 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxced33ed9b1e24: link becomes ready Aug 13 00:55:19.163692 systemd-networkd[1020]: lxced33ed9b1e24: Gained carrier Aug 13 00:55:19.211936 systemd-networkd[1020]: cilium_vxlan: Gained IPv6LL Aug 13 00:55:19.568989 kubelet[1936]: E0813 00:55:19.568951 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:19.979999 systemd-networkd[1020]: lxc_health: Gained IPv6LL Aug 13 00:55:20.571370 kubelet[1936]: E0813 00:55:20.571324 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:20.811933 systemd-networkd[1020]: lxced33ed9b1e24: Gained IPv6LL Aug 13 00:55:21.004919 systemd-networkd[1020]: lxcb13808c4f9cc: Gained IPv6LL Aug 13 00:55:21.573216 kubelet[1936]: E0813 00:55:21.573171 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:55:21.921328 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:50284.service. Aug 13 00:55:21.964369 sshd[3185]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:55:21.965534 sshd[3185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:55:21.969440 systemd-logind[1189]: New session 10 of user core. Aug 13 00:55:21.970144 systemd[1]: Started session-10.scope. Aug 13 00:55:22.092350 sshd[3185]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:22.095113 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:50284.service: Deactivated successfully. 
Aug 13 00:55:22.095863 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:55:22.096731 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:55:22.097563 systemd-logind[1189]: Removed session 10.
Aug 13 00:55:22.507979 env[1203]: time="2025-08-13T00:55:22.507245511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:55:22.507979 env[1203]: time="2025-08-13T00:55:22.507285716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:55:22.507979 env[1203]: time="2025-08-13T00:55:22.507295154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:55:22.507979 env[1203]: time="2025-08-13T00:55:22.507794792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3364ff7b35c92f005767592896a3103cabf749b68669e6667473212f2e4422a1 pid=3216 runtime=io.containerd.runc.v2
Aug 13 00:55:22.508987 env[1203]: time="2025-08-13T00:55:22.508929270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:55:22.509142 env[1203]: time="2025-08-13T00:55:22.509117333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:55:22.509250 env[1203]: time="2025-08-13T00:55:22.509220145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:55:22.509490 env[1203]: time="2025-08-13T00:55:22.509444526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19a91be9828c751c79f693cb3f73ef2b1b28e68f744c56ca28defb63da6ae377 pid=3224 runtime=io.containerd.runc.v2
Aug 13 00:55:22.521325 systemd[1]: Started cri-containerd-19a91be9828c751c79f693cb3f73ef2b1b28e68f744c56ca28defb63da6ae377.scope.
Aug 13 00:55:22.534696 systemd[1]: Started cri-containerd-3364ff7b35c92f005767592896a3103cabf749b68669e6667473212f2e4422a1.scope.
Aug 13 00:55:22.538286 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:55:22.546945 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:55:22.564021 env[1203]: time="2025-08-13T00:55:22.563981478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lkrw,Uid:cc503a57-59e3-4aca-a790-ed73b8f7ffcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"19a91be9828c751c79f693cb3f73ef2b1b28e68f744c56ca28defb63da6ae377\""
Aug 13 00:55:22.564691 kubelet[1936]: E0813 00:55:22.564653 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:22.567417 env[1203]: time="2025-08-13T00:55:22.567074499Z" level=info msg="CreateContainer within sandbox \"19a91be9828c751c79f693cb3f73ef2b1b28e68f744c56ca28defb63da6ae377\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:55:22.571316 env[1203]: time="2025-08-13T00:55:22.571245464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qnrkk,Uid:1de871da-bd4c-4b27-a97d-a16c629da180,Namespace:kube-system,Attempt:0,} returns sandbox id \"3364ff7b35c92f005767592896a3103cabf749b68669e6667473212f2e4422a1\""
Aug 13 00:55:22.572419 kubelet[1936]: E0813 00:55:22.572389 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:22.574225 env[1203]: time="2025-08-13T00:55:22.574195447Z" level=info msg="CreateContainer within sandbox \"3364ff7b35c92f005767592896a3103cabf749b68669e6667473212f2e4422a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:55:23.516347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711288906.mount: Deactivated successfully.
Aug 13 00:55:24.248503 env[1203]: time="2025-08-13T00:55:24.248425765Z" level=info msg="CreateContainer within sandbox \"19a91be9828c751c79f693cb3f73ef2b1b28e68f744c56ca28defb63da6ae377\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d644bfa1f2234894908c08ab1f93e5ee308ebab1c196455cb02cc292f8442ad9\""
Aug 13 00:55:24.250531 env[1203]: time="2025-08-13T00:55:24.249105831Z" level=info msg="StartContainer for \"d644bfa1f2234894908c08ab1f93e5ee308ebab1c196455cb02cc292f8442ad9\""
Aug 13 00:55:24.267738 systemd[1]: Started cri-containerd-d644bfa1f2234894908c08ab1f93e5ee308ebab1c196455cb02cc292f8442ad9.scope.
Aug 13 00:55:24.275667 env[1203]: time="2025-08-13T00:55:24.275548311Z" level=info msg="CreateContainer within sandbox \"3364ff7b35c92f005767592896a3103cabf749b68669e6667473212f2e4422a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45953854d18cf20c006fef0e3730d9331bb1b38779fc7ae748669f67bbdcf975\""
Aug 13 00:55:24.277952 env[1203]: time="2025-08-13T00:55:24.276480950Z" level=info msg="StartContainer for \"45953854d18cf20c006fef0e3730d9331bb1b38779fc7ae748669f67bbdcf975\""
Aug 13 00:55:24.297196 systemd[1]: Started cri-containerd-45953854d18cf20c006fef0e3730d9331bb1b38779fc7ae748669f67bbdcf975.scope.
Aug 13 00:55:24.343566 env[1203]: time="2025-08-13T00:55:24.343473936Z" level=info msg="StartContainer for \"d644bfa1f2234894908c08ab1f93e5ee308ebab1c196455cb02cc292f8442ad9\" returns successfully"
Aug 13 00:55:24.371302 env[1203]: time="2025-08-13T00:55:24.371224069Z" level=info msg="StartContainer for \"45953854d18cf20c006fef0e3730d9331bb1b38779fc7ae748669f67bbdcf975\" returns successfully"
Aug 13 00:55:24.516971 systemd[1]: run-containerd-runc-k8s.io-d644bfa1f2234894908c08ab1f93e5ee308ebab1c196455cb02cc292f8442ad9-runc.XPPNY9.mount: Deactivated successfully.
Aug 13 00:55:24.581295 kubelet[1936]: E0813 00:55:24.581158 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:24.583325 kubelet[1936]: E0813 00:55:24.583297 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:24.601852 kubelet[1936]: I0813 00:55:24.601788 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qnrkk" podStartSLOduration=36.6017662 podStartE2EDuration="36.6017662s" podCreationTimestamp="2025-08-13 00:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:24.592461596 +0000 UTC m=+41.232967901" watchObservedRunningTime="2025-08-13 00:55:24.6017662 +0000 UTC m=+41.242272506"
Aug 13 00:55:25.584779 kubelet[1936]: E0813 00:55:25.584723 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:25.585175 kubelet[1936]: E0813 00:55:25.584812 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:26.586156 kubelet[1936]: E0813 00:55:26.586100 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:26.586580 kubelet[1936]: E0813 00:55:26.586180 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:27.097147 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:50292.service.
Aug 13 00:55:27.140886 sshd[3371]: Accepted publickey for core from 10.0.0.1 port 50292 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:27.142558 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:27.147058 systemd-logind[1189]: New session 11 of user core.
Aug 13 00:55:27.148139 systemd[1]: Started session-11.scope.
Aug 13 00:55:27.478182 sshd[3371]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:27.481662 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:50292.service: Deactivated successfully.
Aug 13 00:55:27.482385 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:55:27.482967 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:55:27.483773 systemd-logind[1189]: Removed session 11.
Aug 13 00:55:32.273226 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:37510.service.
Aug 13 00:55:32.316129 sshd[3385]: Accepted publickey for core from 10.0.0.1 port 37510 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:32.317770 sshd[3385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:32.321160 systemd-logind[1189]: New session 12 of user core.
Aug 13 00:55:32.322113 systemd[1]: Started session-12.scope.
Aug 13 00:55:32.460294 sshd[3385]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:32.462726 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:37510.service: Deactivated successfully.
Aug 13 00:55:32.463439 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:55:32.463993 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:55:32.464730 systemd-logind[1189]: Removed session 12.
Aug 13 00:55:37.465473 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:37522.service.
Aug 13 00:55:37.508029 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 37522 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:37.509369 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:37.512691 systemd-logind[1189]: New session 13 of user core.
Aug 13 00:55:37.513556 systemd[1]: Started session-13.scope.
Aug 13 00:55:37.626010 sshd[3400]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:37.628997 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:37522.service: Deactivated successfully.
Aug 13 00:55:37.629529 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:55:37.630090 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:55:37.631093 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:37532.service.
Aug 13 00:55:37.631993 systemd-logind[1189]: Removed session 13.
Aug 13 00:55:37.675114 sshd[3414]: Accepted publickey for core from 10.0.0.1 port 37532 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:37.676545 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:37.680236 systemd-logind[1189]: New session 14 of user core.
Aug 13 00:55:37.681046 systemd[1]: Started session-14.scope.
Aug 13 00:55:37.849027 sshd[3414]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:37.852650 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:37548.service.
Aug 13 00:55:37.853110 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:37532.service: Deactivated successfully.
Aug 13 00:55:37.855459 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:55:37.856111 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:55:37.857396 systemd-logind[1189]: Removed session 14.
Aug 13 00:55:37.902091 sshd[3424]: Accepted publickey for core from 10.0.0.1 port 37548 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:37.903715 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:37.908090 systemd-logind[1189]: New session 15 of user core.
Aug 13 00:55:37.909121 systemd[1]: Started session-15.scope.
Aug 13 00:55:38.027462 sshd[3424]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:38.030014 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:37548.service: Deactivated successfully.
Aug 13 00:55:38.030737 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:55:38.031622 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:55:38.032384 systemd-logind[1189]: Removed session 15.
Aug 13 00:55:43.032848 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:53508.service.
Aug 13 00:55:43.075850 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 53508 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:43.077305 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:43.081536 systemd-logind[1189]: New session 16 of user core.
Aug 13 00:55:43.082578 systemd[1]: Started session-16.scope.
Aug 13 00:55:43.198557 sshd[3438]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:43.201846 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:53508.service: Deactivated successfully.
Aug 13 00:55:43.202694 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:55:43.203397 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:55:43.204345 systemd-logind[1189]: Removed session 16.
Aug 13 00:55:48.203679 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:53522.service.
Aug 13 00:55:48.243614 sshd[3454]: Accepted publickey for core from 10.0.0.1 port 53522 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:48.244735 sshd[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:48.248041 systemd-logind[1189]: New session 17 of user core.
Aug 13 00:55:48.248824 systemd[1]: Started session-17.scope.
Aug 13 00:55:48.357345 sshd[3454]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:48.360741 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:53522.service: Deactivated successfully.
Aug 13 00:55:48.361438 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:55:48.362069 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:55:48.363284 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:53528.service.
Aug 13 00:55:48.364103 systemd-logind[1189]: Removed session 17.
Aug 13 00:55:48.403778 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 53528 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:48.405132 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:48.408915 systemd-logind[1189]: New session 18 of user core.
Aug 13 00:55:48.410115 systemd[1]: Started session-18.scope.
Aug 13 00:55:48.798271 sshd[3468]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:48.801896 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:53528.service: Deactivated successfully.
Aug 13 00:55:48.802702 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:55:48.803698 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:55:48.805218 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:53538.service.
Aug 13 00:55:48.805941 systemd-logind[1189]: Removed session 18.
Aug 13 00:55:48.847087 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 53538 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:48.848187 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:48.851588 systemd-logind[1189]: New session 19 of user core.
Aug 13 00:55:48.852383 systemd[1]: Started session-19.scope.
Aug 13 00:55:49.334381 sshd[3480]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:49.336767 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:53546.service.
Aug 13 00:55:49.337215 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:53538.service: Deactivated successfully.
Aug 13 00:55:49.337775 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:55:49.338926 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:55:49.339825 systemd-logind[1189]: Removed session 19.
Aug 13 00:55:49.380446 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 53546 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:49.382022 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:49.385696 systemd-logind[1189]: New session 20 of user core.
Aug 13 00:55:49.386637 systemd[1]: Started session-20.scope.
Aug 13 00:55:49.626925 sshd[3500]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:49.631080 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:53548.service.
Aug 13 00:55:49.631685 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:53546.service: Deactivated successfully.
Aug 13 00:55:49.632871 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:55:49.633475 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:55:49.634448 systemd-logind[1189]: Removed session 20.
Aug 13 00:55:49.671552 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 53548 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:49.672922 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:49.676273 systemd-logind[1189]: New session 21 of user core.
Aug 13 00:55:49.677090 systemd[1]: Started session-21.scope.
Aug 13 00:55:49.788364 sshd[3511]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:49.790853 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:53548.service: Deactivated successfully.
Aug 13 00:55:49.791543 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:55:49.792114 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:55:49.792720 systemd-logind[1189]: Removed session 21.
Aug 13 00:55:54.792963 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:44576.service.
Aug 13 00:55:54.831827 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 44576 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:54.832828 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:54.835947 systemd-logind[1189]: New session 22 of user core.
Aug 13 00:55:54.836776 systemd[1]: Started session-22.scope.
Aug 13 00:55:54.935198 sshd[3527]: pam_unix(sshd:session): session closed for user core
Aug 13 00:55:54.937201 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:44576.service: Deactivated successfully.
Aug 13 00:55:54.937844 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:55:54.938555 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:55:54.939276 systemd-logind[1189]: Removed session 22.
Aug 13 00:55:55.470101 kubelet[1936]: E0813 00:55:55.470058 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:55:59.939572 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:58386.service.
Aug 13 00:55:59.987001 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 58386 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:55:59.988395 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:55:59.992009 systemd-logind[1189]: New session 23 of user core.
Aug 13 00:55:59.992838 systemd[1]: Started session-23.scope.
Aug 13 00:56:00.101044 sshd[3542]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:00.104292 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:58386.service: Deactivated successfully.
Aug 13 00:56:00.105265 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:56:00.105836 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:56:00.106576 systemd-logind[1189]: Removed session 23.
Aug 13 00:56:05.106078 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:58392.service.
Aug 13 00:56:05.148889 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:56:05.150370 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:05.154682 systemd-logind[1189]: New session 24 of user core.
Aug 13 00:56:05.155965 systemd[1]: Started session-24.scope.
Aug 13 00:56:05.265081 sshd[3556]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:05.268007 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:58392.service: Deactivated successfully.
Aug 13 00:56:05.268831 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:56:05.269547 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:56:05.270453 systemd-logind[1189]: Removed session 24.
Aug 13 00:56:07.470335 kubelet[1936]: E0813 00:56:07.470269 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:10.270722 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:60892.service.
Aug 13 00:56:10.315380 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 60892 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:56:10.317088 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:10.321009 systemd-logind[1189]: New session 25 of user core.
Aug 13 00:56:10.322076 systemd[1]: Started session-25.scope.
Aug 13 00:56:10.435537 sshd[3569]: pam_unix(sshd:session): session closed for user core
Aug 13 00:56:10.439026 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:60892.service: Deactivated successfully.
Aug 13 00:56:10.439594 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:56:10.441205 systemd[1]: Started sshd@25-10.0.0.21:22-10.0.0.1:60894.service.
Aug 13 00:56:10.442305 systemd-logind[1189]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:56:10.443252 systemd-logind[1189]: Removed session 25.
Aug 13 00:56:10.485022 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 60894 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:56:10.486783 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:10.491048 systemd-logind[1189]: New session 26 of user core.
Aug 13 00:56:10.492004 systemd[1]: Started session-26.scope.
Aug 13 00:56:11.991898 kubelet[1936]: I0813 00:56:11.991811 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7lkrw" podStartSLOduration=83.991792163 podStartE2EDuration="1m23.991792163s" podCreationTimestamp="2025-08-13 00:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:24.614165098 +0000 UTC m=+41.254671403" watchObservedRunningTime="2025-08-13 00:56:11.991792163 +0000 UTC m=+88.632298468"
Aug 13 00:56:12.024835 env[1203]: time="2025-08-13T00:56:12.024779687Z" level=info msg="StopContainer for \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\" with timeout 30 (s)"
Aug 13 00:56:12.026423 env[1203]: time="2025-08-13T00:56:12.026374255Z" level=info msg="Stop container \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\" with signal terminated"
Aug 13 00:56:12.027752 env[1203]: time="2025-08-13T00:56:12.027704141Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:56:12.033183 env[1203]: time="2025-08-13T00:56:12.033145794Z" level=info msg="StopContainer for \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\" with timeout 2 (s)"
Aug 13 00:56:12.033545 env[1203]: time="2025-08-13T00:56:12.033480118Z" level=info msg="Stop container \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\" with signal terminated"
Aug 13 00:56:12.038776 systemd[1]: cri-containerd-a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347.scope: Deactivated successfully.
Aug 13 00:56:12.041010 systemd-networkd[1020]: lxc_health: Link DOWN
Aug 13 00:56:12.041017 systemd-networkd[1020]: lxc_health: Lost carrier
Aug 13 00:56:12.059185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347-rootfs.mount: Deactivated successfully.
Aug 13 00:56:12.076522 env[1203]: time="2025-08-13T00:56:12.076450483Z" level=info msg="shim disconnected" id=a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347
Aug 13 00:56:12.076522 env[1203]: time="2025-08-13T00:56:12.076517210Z" level=warning msg="cleaning up after shim disconnected" id=a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347 namespace=k8s.io
Aug 13 00:56:12.076522 env[1203]: time="2025-08-13T00:56:12.076531296Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:12.078127 systemd[1]: cri-containerd-4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605.scope: Deactivated successfully.
Aug 13 00:56:12.078369 systemd[1]: cri-containerd-4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605.scope: Consumed 6.175s CPU time.
Aug 13 00:56:12.083673 env[1203]: time="2025-08-13T00:56:12.083610900Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:12.087327 env[1203]: time="2025-08-13T00:56:12.087282421Z" level=info msg="StopContainer for \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\" returns successfully"
Aug 13 00:56:12.088114 env[1203]: time="2025-08-13T00:56:12.088085136Z" level=info msg="StopPodSandbox for \"99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016\""
Aug 13 00:56:12.088191 env[1203]: time="2025-08-13T00:56:12.088170198Z" level=info msg="Container to stop \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:12.090746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016-shm.mount: Deactivated successfully.
Aug 13 00:56:12.096512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605-rootfs.mount: Deactivated successfully.
Aug 13 00:56:12.102905 systemd[1]: cri-containerd-99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016.scope: Deactivated successfully.
Aug 13 00:56:12.103602 env[1203]: time="2025-08-13T00:56:12.103546755Z" level=info msg="shim disconnected" id=4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605
Aug 13 00:56:12.103695 env[1203]: time="2025-08-13T00:56:12.103604245Z" level=warning msg="cleaning up after shim disconnected" id=4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605 namespace=k8s.io
Aug 13 00:56:12.103695 env[1203]: time="2025-08-13T00:56:12.103615887Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:12.112156 env[1203]: time="2025-08-13T00:56:12.112099687Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3669 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:12.114973 env[1203]: time="2025-08-13T00:56:12.114913770Z" level=info msg="StopContainer for \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\" returns successfully"
Aug 13 00:56:12.115554 env[1203]: time="2025-08-13T00:56:12.115515222Z" level=info msg="StopPodSandbox for \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\""
Aug 13 00:56:12.115638 env[1203]: time="2025-08-13T00:56:12.115590294Z" level=info msg="Container to stop \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:12.115638 env[1203]: time="2025-08-13T00:56:12.115604662Z" level=info msg="Container to stop \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:12.115638 env[1203]: time="2025-08-13T00:56:12.115614501Z" level=info msg="Container to stop \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:12.115638 env[1203]: time="2025-08-13T00:56:12.115625030Z" level=info msg="Container to stop \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:12.115638 env[1203]: time="2025-08-13T00:56:12.115633787Z" level=info msg="Container to stop \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:56:12.117801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2-shm.mount: Deactivated successfully.
Aug 13 00:56:12.125829 systemd[1]: cri-containerd-fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2.scope: Deactivated successfully.
Aug 13 00:56:12.127927 env[1203]: time="2025-08-13T00:56:12.127878107Z" level=info msg="shim disconnected" id=99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016
Aug 13 00:56:12.128144 env[1203]: time="2025-08-13T00:56:12.128101541Z" level=warning msg="cleaning up after shim disconnected" id=99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016 namespace=k8s.io
Aug 13 00:56:12.128144 env[1203]: time="2025-08-13T00:56:12.128127090Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:12.135468 env[1203]: time="2025-08-13T00:56:12.135424847Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3701 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:12.136165 env[1203]: time="2025-08-13T00:56:12.136134565Z" level=info msg="TearDown network for sandbox \"99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016\" successfully"
Aug 13 00:56:12.136165 env[1203]: time="2025-08-13T00:56:12.136160525Z" level=info msg="StopPodSandbox for \"99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016\" returns successfully"
Aug 13 00:56:12.153489 env[1203]: time="2025-08-13T00:56:12.153431680Z" level=info msg="shim disconnected" id=fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2
Aug 13 00:56:12.153841 env[1203]: time="2025-08-13T00:56:12.153798858Z" level=warning msg="cleaning up after shim disconnected" id=fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2 namespace=k8s.io
Aug 13 00:56:12.153841 env[1203]: time="2025-08-13T00:56:12.153822682Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:12.164263 env[1203]: time="2025-08-13T00:56:12.164191481Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3725 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:12.164727 env[1203]: time="2025-08-13T00:56:12.164683676Z" level=info msg="TearDown network for sandbox \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" successfully"
Aug 13 00:56:12.164727 env[1203]: time="2025-08-13T00:56:12.164711509Z" level=info msg="StopPodSandbox for \"fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2\" returns successfully"
Aug 13 00:56:12.304327 kubelet[1936]: I0813 00:56:12.303562 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-lib-modules\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304327 kubelet[1936]: I0813 00:56:12.303620 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85e7325e-7501-4104-81f7-1173751973ec-clustermesh-secrets\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304327 kubelet[1936]: I0813 00:56:12.303640 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-run\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304327 kubelet[1936]: I0813 00:56:12.303657 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf92g\" (UniqueName: \"kubernetes.io/projected/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-kube-api-access-wf92g\") pod \"bfa68123-8c71-4d6b-a45c-875b89f6bf9d\" (UID: \"bfa68123-8c71-4d6b-a45c-875b89f6bf9d\") "
Aug 13 00:56:12.304327 kubelet[1936]: I0813 00:56:12.303677 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85e7325e-7501-4104-81f7-1173751973ec-cilium-config-path\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304327 kubelet[1936]: I0813 00:56:12.303691 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-hostproc\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304652 kubelet[1936]: I0813 00:56:12.303709 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-cilium-config-path\") pod \"bfa68123-8c71-4d6b-a45c-875b89f6bf9d\" (UID: \"bfa68123-8c71-4d6b-a45c-875b89f6bf9d\") "
Aug 13 00:56:12.304652 kubelet[1936]: I0813 00:56:12.303709 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:12.304652 kubelet[1936]: I0813 00:56:12.303743 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:12.304652 kubelet[1936]: I0813 00:56:12.303789 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:12.304652 kubelet[1936]: I0813 00:56:12.303724 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-etc-cni-netd\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304820 kubelet[1936]: I0813 00:56:12.303817 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cni-path\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304820 kubelet[1936]: I0813 00:56:12.303835 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-cgroup\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") "
Aug 13 00:56:12.304820 kubelet[1936]: I0813
00:56:12.303851 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-hubble-tls\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " Aug 13 00:56:12.304820 kubelet[1936]: I0813 00:56:12.303863 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-kernel\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " Aug 13 00:56:12.304820 kubelet[1936]: I0813 00:56:12.303876 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-bpf-maps\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " Aug 13 00:56:12.304820 kubelet[1936]: I0813 00:56:12.303888 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-xtables-lock\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " Aug 13 00:56:12.305129 kubelet[1936]: I0813 00:56:12.303903 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-net\") pod \"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " Aug 13 00:56:12.305129 kubelet[1936]: I0813 00:56:12.303921 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm4z6\" (UniqueName: \"kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-kube-api-access-pm4z6\") pod 
\"85e7325e-7501-4104-81f7-1173751973ec\" (UID: \"85e7325e-7501-4104-81f7-1173751973ec\") " Aug 13 00:56:12.305129 kubelet[1936]: I0813 00:56:12.303956 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.305129 kubelet[1936]: I0813 00:56:12.303963 1936 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.305129 kubelet[1936]: I0813 00:56:12.303971 1936 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.305129 kubelet[1936]: I0813 00:56:12.304103 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-hostproc" (OuterVolumeSpecName: "hostproc") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:12.305288 kubelet[1936]: I0813 00:56:12.304865 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cni-path" (OuterVolumeSpecName: "cni-path") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:12.305288 kubelet[1936]: I0813 00:56:12.304893 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:12.306317 kubelet[1936]: I0813 00:56:12.306286 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:12.306380 kubelet[1936]: I0813 00:56:12.306334 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:12.306380 kubelet[1936]: I0813 00:56:12.306360 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:12.306433 kubelet[1936]: I0813 00:56:12.306381 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:56:12.306611 kubelet[1936]: I0813 00:56:12.306587 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e7325e-7501-4104-81f7-1173751973ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:56:12.307644 kubelet[1936]: I0813 00:56:12.307615 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bfa68123-8c71-4d6b-a45c-875b89f6bf9d" (UID: "bfa68123-8c71-4d6b-a45c-875b89f6bf9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:56:12.308794 kubelet[1936]: I0813 00:56:12.308767 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e7325e-7501-4104-81f7-1173751973ec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:56:12.309000 kubelet[1936]: I0813 00:56:12.308963 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:56:12.310617 kubelet[1936]: I0813 00:56:12.310570 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-kube-api-access-pm4z6" (OuterVolumeSpecName: "kube-api-access-pm4z6") pod "85e7325e-7501-4104-81f7-1173751973ec" (UID: "85e7325e-7501-4104-81f7-1173751973ec"). InnerVolumeSpecName "kube-api-access-pm4z6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:56:12.310740 kubelet[1936]: I0813 00:56:12.310714 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-kube-api-access-wf92g" (OuterVolumeSpecName: "kube-api-access-wf92g") pod "bfa68123-8c71-4d6b-a45c-875b89f6bf9d" (UID: "bfa68123-8c71-4d6b-a45c-875b89f6bf9d"). InnerVolumeSpecName "kube-api-access-wf92g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:56:12.405087 kubelet[1936]: I0813 00:56:12.405014 1936 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405087 kubelet[1936]: I0813 00:56:12.405075 1936 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405087 kubelet[1936]: I0813 00:56:12.405085 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405087 kubelet[1936]: I0813 00:56:12.405097 1936 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405112 1936 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405121 1936 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405130 1936 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405139 1936 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pm4z6\" (UniqueName: \"kubernetes.io/projected/85e7325e-7501-4104-81f7-1173751973ec-kube-api-access-pm4z6\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405149 1936 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85e7325e-7501-4104-81f7-1173751973ec-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405159 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405168 1936 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wf92g\" (UniqueName: \"kubernetes.io/projected/bfa68123-8c71-4d6b-a45c-875b89f6bf9d-kube-api-access-wf92g\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405378 kubelet[1936]: I0813 00:56:12.405177 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85e7325e-7501-4104-81f7-1173751973ec-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.405574 kubelet[1936]: I0813 00:56:12.405186 1936 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85e7325e-7501-4104-81f7-1173751973ec-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 00:56:12.673684 kubelet[1936]: I0813 00:56:12.673638 1936 scope.go:117] "RemoveContainer" containerID="a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347" Aug 13 00:56:12.675557 env[1203]: time="2025-08-13T00:56:12.675523522Z" level=info msg="RemoveContainer for \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\"" Aug 13 00:56:12.677805 systemd[1]: Removed 
slice kubepods-besteffort-podbfa68123_8c71_4d6b_a45c_875b89f6bf9d.slice. Aug 13 00:56:12.680188 env[1203]: time="2025-08-13T00:56:12.680152933Z" level=info msg="RemoveContainer for \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\" returns successfully" Aug 13 00:56:12.680533 kubelet[1936]: I0813 00:56:12.680500 1936 scope.go:117] "RemoveContainer" containerID="a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347" Aug 13 00:56:12.680823 env[1203]: time="2025-08-13T00:56:12.680712946Z" level=error msg="ContainerStatus for \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\": not found" Aug 13 00:56:12.681154 kubelet[1936]: E0813 00:56:12.681101 1936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\": not found" containerID="a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347" Aug 13 00:56:12.681303 kubelet[1936]: I0813 00:56:12.681167 1936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347"} err="failed to get container status \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\": rpc error: code = NotFound desc = an error occurred when try to find container \"a06ee225f772864eda9b49585eaae9e58fff28da4ab539a4c99ba48dcd5c3347\": not found" Aug 13 00:56:12.681303 kubelet[1936]: I0813 00:56:12.681288 1936 scope.go:117] "RemoveContainer" containerID="4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605" Aug 13 00:56:12.681568 systemd[1]: Removed slice kubepods-burstable-pod85e7325e_7501_4104_81f7_1173751973ec.slice. 
Aug 13 00:56:12.681667 systemd[1]: kubepods-burstable-pod85e7325e_7501_4104_81f7_1173751973ec.slice: Consumed 6.283s CPU time. Aug 13 00:56:12.682197 env[1203]: time="2025-08-13T00:56:12.682165093Z" level=info msg="RemoveContainer for \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\"" Aug 13 00:56:12.685530 env[1203]: time="2025-08-13T00:56:12.685475379Z" level=info msg="RemoveContainer for \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\" returns successfully" Aug 13 00:56:12.685703 kubelet[1936]: I0813 00:56:12.685684 1936 scope.go:117] "RemoveContainer" containerID="7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7" Aug 13 00:56:12.686645 env[1203]: time="2025-08-13T00:56:12.686615424Z" level=info msg="RemoveContainer for \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\"" Aug 13 00:56:12.690595 env[1203]: time="2025-08-13T00:56:12.690549926Z" level=info msg="RemoveContainer for \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\" returns successfully" Aug 13 00:56:12.690894 kubelet[1936]: I0813 00:56:12.690802 1936 scope.go:117] "RemoveContainer" containerID="cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9" Aug 13 00:56:12.693381 env[1203]: time="2025-08-13T00:56:12.693091972Z" level=info msg="RemoveContainer for \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\"" Aug 13 00:56:12.696209 env[1203]: time="2025-08-13T00:56:12.696162353Z" level=info msg="RemoveContainer for \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\" returns successfully" Aug 13 00:56:12.696371 kubelet[1936]: I0813 00:56:12.696347 1936 scope.go:117] "RemoveContainer" containerID="d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196" Aug 13 00:56:12.697527 env[1203]: time="2025-08-13T00:56:12.697478892Z" level=info msg="RemoveContainer for \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\"" Aug 13 00:56:12.701287 env[1203]: 
time="2025-08-13T00:56:12.701239814Z" level=info msg="RemoveContainer for \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\" returns successfully" Aug 13 00:56:12.701478 kubelet[1936]: I0813 00:56:12.701444 1936 scope.go:117] "RemoveContainer" containerID="92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1" Aug 13 00:56:12.702487 env[1203]: time="2025-08-13T00:56:12.702462295Z" level=info msg="RemoveContainer for \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\"" Aug 13 00:56:12.705517 env[1203]: time="2025-08-13T00:56:12.705494703Z" level=info msg="RemoveContainer for \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\" returns successfully" Aug 13 00:56:12.705739 kubelet[1936]: I0813 00:56:12.705709 1936 scope.go:117] "RemoveContainer" containerID="4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605" Aug 13 00:56:12.706111 env[1203]: time="2025-08-13T00:56:12.706041702Z" level=error msg="ContainerStatus for \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\": not found" Aug 13 00:56:12.706822 kubelet[1936]: E0813 00:56:12.706286 1936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\": not found" containerID="4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605" Aug 13 00:56:12.706822 kubelet[1936]: I0813 00:56:12.706322 1936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605"} err="failed to get container status \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"4f25abbc90ebb895f5353bfa86810b2fbac01834fe1a9ff34e4ac1232f03c605\": not found" Aug 13 00:56:12.706822 kubelet[1936]: I0813 00:56:12.706350 1936 scope.go:117] "RemoveContainer" containerID="7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7" Aug 13 00:56:12.707006 env[1203]: time="2025-08-13T00:56:12.706683731Z" level=error msg="ContainerStatus for \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\": not found" Aug 13 00:56:12.707057 kubelet[1936]: E0813 00:56:12.706994 1936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\": not found" containerID="7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7" Aug 13 00:56:12.707090 kubelet[1936]: I0813 00:56:12.707070 1936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7"} err="failed to get container status \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fdfb25dc25a5beaf04757da71be05daa6a988a9fc7e286c30cf0f8c1c11cbc7\": not found" Aug 13 00:56:12.707119 kubelet[1936]: I0813 00:56:12.707095 1936 scope.go:117] "RemoveContainer" containerID="cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9" Aug 13 00:56:12.707479 env[1203]: time="2025-08-13T00:56:12.707371357Z" level=error msg="ContainerStatus for \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\": not found" Aug 13 00:56:12.707588 kubelet[1936]: E0813 00:56:12.707496 1936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\": not found" containerID="cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9" Aug 13 00:56:12.707588 kubelet[1936]: I0813 00:56:12.707516 1936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9"} err="failed to get container status \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc96a18793574c33bdc55d1d33c0cd29a2222c3c2ee525c8e17dc6868f4029b9\": not found" Aug 13 00:56:12.707588 kubelet[1936]: I0813 00:56:12.707532 1936 scope.go:117] "RemoveContainer" containerID="d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196" Aug 13 00:56:12.707789 env[1203]: time="2025-08-13T00:56:12.707696555Z" level=error msg="ContainerStatus for \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\": not found" Aug 13 00:56:12.707870 kubelet[1936]: E0813 00:56:12.707844 1936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\": not found" containerID="d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196" Aug 13 00:56:12.707932 kubelet[1936]: I0813 00:56:12.707868 1936 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196"} err="failed to get container status \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8bbad4ae3bdb4d724e9aff7653e3c3a847f5a82f663724b558fc9083d801196\": not found" Aug 13 00:56:12.707932 kubelet[1936]: I0813 00:56:12.707886 1936 scope.go:117] "RemoveContainer" containerID="92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1" Aug 13 00:56:12.708086 env[1203]: time="2025-08-13T00:56:12.708025909Z" level=error msg="ContainerStatus for \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\": not found" Aug 13 00:56:12.708155 kubelet[1936]: E0813 00:56:12.708136 1936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\": not found" containerID="92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1" Aug 13 00:56:12.708190 kubelet[1936]: I0813 00:56:12.708155 1936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1"} err="failed to get container status \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"92f39850297bd5f6fee273ed05626225c6b9471a169f3804a01655704fcad6c1\": not found" Aug 13 00:56:13.009352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99359e33d5990b9e2dd36decc8267659b4695da3d8762a9a7e299b71f5a66016-rootfs.mount: Deactivated successfully. 
Aug 13 00:56:13.009482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fde0efe6814136cfbf1122c9e0d90194fa6d0e6ddfb423e9f546337a59e5a0e2-rootfs.mount: Deactivated successfully. Aug 13 00:56:13.009579 systemd[1]: var-lib-kubelet-pods-85e7325e\x2d7501\x2d4104\x2d81f7\x2d1173751973ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpm4z6.mount: Deactivated successfully. Aug 13 00:56:13.009672 systemd[1]: var-lib-kubelet-pods-bfa68123\x2d8c71\x2d4d6b\x2da45c\x2d875b89f6bf9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwf92g.mount: Deactivated successfully. Aug 13 00:56:13.009778 systemd[1]: var-lib-kubelet-pods-85e7325e\x2d7501\x2d4104\x2d81f7\x2d1173751973ec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:56:13.009856 systemd[1]: var-lib-kubelet-pods-85e7325e\x2d7501\x2d4104\x2d81f7\x2d1173751973ec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:56:13.471427 kubelet[1936]: I0813 00:56:13.471374 1936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e7325e-7501-4104-81f7-1173751973ec" path="/var/lib/kubelet/pods/85e7325e-7501-4104-81f7-1173751973ec/volumes" Aug 13 00:56:13.471979 kubelet[1936]: I0813 00:56:13.471949 1936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa68123-8c71-4d6b-a45c-875b89f6bf9d" path="/var/lib/kubelet/pods/bfa68123-8c71-4d6b-a45c-875b89f6bf9d/volumes" Aug 13 00:56:13.505908 kubelet[1936]: E0813 00:56:13.505857 1936 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:56:13.815379 sshd[3583]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:13.818553 systemd[1]: sshd@25-10.0.0.21:22-10.0.0.1:60894.service: Deactivated successfully. Aug 13 00:56:13.819262 systemd[1]: session-26.scope: Deactivated successfully. 
Aug 13 00:56:13.819840 systemd-logind[1189]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:56:13.821353 systemd[1]: Started sshd@26-10.0.0.21:22-10.0.0.1:60910.service. Aug 13 00:56:13.822240 systemd-logind[1189]: Removed session 26. Aug 13 00:56:13.861859 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 60910 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:56:13.862877 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:13.866744 systemd-logind[1189]: New session 27 of user core. Aug 13 00:56:13.867849 systemd[1]: Started session-27.scope. Aug 13 00:56:14.320793 sshd[3745]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:14.323360 systemd[1]: Started sshd@27-10.0.0.21:22-10.0.0.1:60912.service. Aug 13 00:56:14.326502 systemd[1]: sshd@26-10.0.0.21:22-10.0.0.1:60910.service: Deactivated successfully. Aug 13 00:56:14.327229 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:56:14.328188 systemd-logind[1189]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:56:14.329160 systemd-logind[1189]: Removed session 27. Aug 13 00:56:14.343818 kubelet[1936]: I0813 00:56:14.343748 1936 memory_manager.go:355] "RemoveStaleState removing state" podUID="bfa68123-8c71-4d6b-a45c-875b89f6bf9d" containerName="cilium-operator" Aug 13 00:56:14.343818 kubelet[1936]: I0813 00:56:14.343800 1936 memory_manager.go:355] "RemoveStaleState removing state" podUID="85e7325e-7501-4104-81f7-1173751973ec" containerName="cilium-agent" Aug 13 00:56:14.351648 systemd[1]: Created slice kubepods-burstable-podd3222d61_e1f2_46ba_b32c_3e131ced4893.slice. 
Aug 13 00:56:14.365342 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 60912 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s Aug 13 00:56:14.366571 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:14.370895 systemd-logind[1189]: New session 28 of user core. Aug 13 00:56:14.371823 systemd[1]: Started session-28.scope. Aug 13 00:56:14.505572 sshd[3756]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:14.509489 systemd[1]: sshd@27-10.0.0.21:22-10.0.0.1:60912.service: Deactivated successfully. Aug 13 00:56:14.510298 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:56:14.511478 systemd-logind[1189]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:56:14.513913 systemd[1]: Started sshd@28-10.0.0.21:22-10.0.0.1:60924.service. Aug 13 00:56:14.515192 kubelet[1936]: I0813 00:56:14.515110 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-kernel\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4" Aug 13 00:56:14.515192 kubelet[1936]: I0813 00:56:14.515143 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrgb\" (UniqueName: \"kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-kube-api-access-qnrgb\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4" Aug 13 00:56:14.515192 kubelet[1936]: I0813 00:56:14.515160 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-etc-cni-netd\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4" 
Aug 13 00:56:14.515192 kubelet[1936]: I0813 00:56:14.515175 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-config-path\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515192 kubelet[1936]: I0813 00:56:14.515191 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-xtables-lock\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515500 kubelet[1936]: I0813 00:56:14.515206 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-bpf-maps\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515500 kubelet[1936]: I0813 00:56:14.515219 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-clustermesh-secrets\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515500 kubelet[1936]: I0813 00:56:14.515276 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-hubble-tls\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515500 kubelet[1936]: I0813 00:56:14.515291 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-hostproc\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515500 kubelet[1936]: I0813 00:56:14.515306 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-cgroup\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515500 kubelet[1936]: I0813 00:56:14.515318 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cni-path\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515645 kubelet[1936]: I0813 00:56:14.515329 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-ipsec-secrets\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515645 kubelet[1936]: I0813 00:56:14.515341 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-run\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515645 kubelet[1936]: I0813 00:56:14.515353 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-lib-modules\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.515645 kubelet[1936]: I0813 00:56:14.515365 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-net\") pod \"cilium-5qqh4\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") " pod="kube-system/cilium-5qqh4"
Aug 13 00:56:14.516705 systemd-logind[1189]: Removed session 28.
Aug 13 00:56:14.533698 kubelet[1936]: E0813 00:56:14.533645 1936 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-qnrgb lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5qqh4" podUID="d3222d61-e1f2-46ba-b32c-3e131ced4893"
Aug 13 00:56:14.560723 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 60924 ssh2: RSA SHA256:qgan5rMjZ6sYv4kBQbHPXcuGXLcxEJ8myXWtyGqiw0s
Aug 13 00:56:14.562003 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:56:14.565987 systemd-logind[1189]: New session 29 of user core.
Aug 13 00:56:14.567045 systemd[1]: Started session-29.scope.
Aug 13 00:56:14.817125 kubelet[1936]: I0813 00:56:14.817018 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-xtables-lock\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817125 kubelet[1936]: I0813 00:56:14.817101 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-hostproc\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817125 kubelet[1936]: I0813 00:56:14.817122 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-run\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817125 kubelet[1936]: I0813 00:56:14.817139 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-net\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817389 kubelet[1936]: I0813 00:56:14.817163 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnrgb\" (UniqueName: \"kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-kube-api-access-qnrgb\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817389 kubelet[1936]: I0813 00:56:14.817179 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-cgroup\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817389 kubelet[1936]: I0813 00:56:14.817165 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.817389 kubelet[1936]: I0813 00:56:14.817191 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cni-path\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817389 kubelet[1936]: I0813 00:56:14.817224 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.817389 kubelet[1936]: I0813 00:56:14.817254 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-bpf-maps\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817527 kubelet[1936]: I0813 00:56:14.817233 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cni-path" (OuterVolumeSpecName: "cni-path") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.817527 kubelet[1936]: I0813 00:56:14.817246 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.817527 kubelet[1936]: I0813 00:56:14.817285 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-config-path\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817527 kubelet[1936]: I0813 00:56:14.817287 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.817527 kubelet[1936]: I0813 00:56:14.817305 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.817643 kubelet[1936]: I0813 00:56:14.817316 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-hubble-tls\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817643 kubelet[1936]: I0813 00:56:14.817333 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-lib-modules\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817643 kubelet[1936]: I0813 00:56:14.817351 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-etc-cni-netd\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817643 kubelet[1936]: I0813 00:56:14.817366 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-clustermesh-secrets\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817643 kubelet[1936]: I0813 00:56:14.817382 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-ipsec-secrets\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817643 kubelet[1936]: I0813 00:56:14.817397 1936 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-kernel\") pod \"d3222d61-e1f2-46ba-b32c-3e131ced4893\" (UID: \"d3222d61-e1f2-46ba-b32c-3e131ced4893\") "
Aug 13 00:56:14.817832 kubelet[1936]: I0813 00:56:14.817452 1936 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.817832 kubelet[1936]: I0813 00:56:14.817460 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.817832 kubelet[1936]: I0813 00:56:14.817468 1936 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.817832 kubelet[1936]: I0813 00:56:14.817477 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.817832 kubelet[1936]: I0813 00:56:14.817484 1936 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.817832 kubelet[1936]: I0813 00:56:14.817491 1936 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.817832 kubelet[1936]: I0813 00:56:14.817510 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.818126 kubelet[1936]: I0813 00:56:14.818105 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-hostproc" (OuterVolumeSpecName: "hostproc") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.818226 kubelet[1936]: I0813 00:56:14.818207 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.818321 kubelet[1936]: I0813 00:56:14.818304 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:56:14.819071 kubelet[1936]: I0813 00:56:14.819044 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:56:14.821032 kubelet[1936]: I0813 00:56:14.821009 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:56:14.821216 kubelet[1936]: I0813 00:56:14.821181 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-kube-api-access-qnrgb" (OuterVolumeSpecName: "kube-api-access-qnrgb") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "kube-api-access-qnrgb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:56:14.821466 systemd[1]: var-lib-kubelet-pods-d3222d61\x2de1f2\x2d46ba\x2db32c\x2d3e131ced4893-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqnrgb.mount: Deactivated successfully.
Aug 13 00:56:14.822098 kubelet[1936]: I0813 00:56:14.822079 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:56:14.822626 kubelet[1936]: I0813 00:56:14.822581 1936 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d3222d61-e1f2-46ba-b32c-3e131ced4893" (UID: "d3222d61-e1f2-46ba-b32c-3e131ced4893"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:56:14.918099 kubelet[1936]: I0813 00:56:14.918021 1936 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918099 kubelet[1936]: I0813 00:56:14.918078 1936 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918099 kubelet[1936]: I0813 00:56:14.918088 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918099 kubelet[1936]: I0813 00:56:14.918098 1936 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918099 kubelet[1936]: I0813 00:56:14.918107 1936 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918099 kubelet[1936]: I0813 00:56:14.918113 1936 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3222d61-e1f2-46ba-b32c-3e131ced4893-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918099 kubelet[1936]: I0813 00:56:14.918120 1936 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918479 kubelet[1936]: I0813 00:56:14.918128 1936 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3222d61-e1f2-46ba-b32c-3e131ced4893-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:14.918479 kubelet[1936]: I0813 00:56:14.918135 1936 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qnrgb\" (UniqueName: \"kubernetes.io/projected/d3222d61-e1f2-46ba-b32c-3e131ced4893-kube-api-access-qnrgb\") on node \"localhost\" DevicePath \"\""
Aug 13 00:56:15.474990 systemd[1]: Removed slice kubepods-burstable-podd3222d61_e1f2_46ba_b32c_3e131ced4893.slice.
Aug 13 00:56:15.620692 systemd[1]: var-lib-kubelet-pods-d3222d61\x2de1f2\x2d46ba\x2db32c\x2d3e131ced4893-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 00:56:15.620849 systemd[1]: var-lib-kubelet-pods-d3222d61\x2de1f2\x2d46ba\x2db32c\x2d3e131ced4893-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:56:15.620945 systemd[1]: var-lib-kubelet-pods-d3222d61\x2de1f2\x2d46ba\x2db32c\x2d3e131ced4893-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:56:15.767838 systemd[1]: Created slice kubepods-burstable-pod957cadef_2af9_4fd4_be1e_1aa6769765b9.slice.
Aug 13 00:56:15.924384 kubelet[1936]: I0813 00:56:15.924297 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/957cadef-2af9-4fd4-be1e-1aa6769765b9-clustermesh-secrets\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924384 kubelet[1936]: I0813 00:56:15.924364 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zmn\" (UniqueName: \"kubernetes.io/projected/957cadef-2af9-4fd4-be1e-1aa6769765b9-kube-api-access-67zmn\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924384 kubelet[1936]: I0813 00:56:15.924389 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-cni-path\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924823 kubelet[1936]: I0813 00:56:15.924462 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-bpf-maps\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924823 kubelet[1936]: I0813 00:56:15.924519 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-cilium-cgroup\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924823 kubelet[1936]: I0813 00:56:15.924547 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/957cadef-2af9-4fd4-be1e-1aa6769765b9-cilium-config-path\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924823 kubelet[1936]: I0813 00:56:15.924564 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-etc-cni-netd\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924823 kubelet[1936]: I0813 00:56:15.924580 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-host-proc-sys-net\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.924823 kubelet[1936]: I0813 00:56:15.924599 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/957cadef-2af9-4fd4-be1e-1aa6769765b9-hubble-tls\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.925002 kubelet[1936]: I0813 00:56:15.924619 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-lib-modules\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.925002 kubelet[1936]: I0813 00:56:15.924639 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-host-proc-sys-kernel\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.925002 kubelet[1936]: I0813 00:56:15.924659 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-hostproc\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.925002 kubelet[1936]: I0813 00:56:15.924678 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-xtables-lock\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.925002 kubelet[1936]: I0813 00:56:15.924736 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/957cadef-2af9-4fd4-be1e-1aa6769765b9-cilium-run\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:15.925002 kubelet[1936]: I0813 00:56:15.924796 1936 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/957cadef-2af9-4fd4-be1e-1aa6769765b9-cilium-ipsec-secrets\") pod \"cilium-tq9v8\" (UID: \"957cadef-2af9-4fd4-be1e-1aa6769765b9\") " pod="kube-system/cilium-tq9v8"
Aug 13 00:56:16.071514 kubelet[1936]: E0813 00:56:16.070824 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:16.071661 env[1203]: time="2025-08-13T00:56:16.071408397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tq9v8,Uid:957cadef-2af9-4fd4-be1e-1aa6769765b9,Namespace:kube-system,Attempt:0,}"
Aug 13 00:56:16.085281 env[1203]: time="2025-08-13T00:56:16.085196342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:56:16.085281 env[1203]: time="2025-08-13T00:56:16.085241438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:56:16.085281 env[1203]: time="2025-08-13T00:56:16.085253460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:56:16.085508 env[1203]: time="2025-08-13T00:56:16.085388707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34 pid=3801 runtime=io.containerd.runc.v2
Aug 13 00:56:16.095782 systemd[1]: Started cri-containerd-da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34.scope.
Aug 13 00:56:16.117072 env[1203]: time="2025-08-13T00:56:16.117006705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tq9v8,Uid:957cadef-2af9-4fd4-be1e-1aa6769765b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\""
Aug 13 00:56:16.117976 kubelet[1936]: E0813 00:56:16.117733 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:16.120073 env[1203]: time="2025-08-13T00:56:16.120010594Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:56:16.133116 env[1203]: time="2025-08-13T00:56:16.133066169Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"990e83d40e506376b2a31220dd8a75268537b6b2095e9bce272b9e6abf9b93fe\""
Aug 13 00:56:16.133626 env[1203]: time="2025-08-13T00:56:16.133565206Z" level=info msg="StartContainer for \"990e83d40e506376b2a31220dd8a75268537b6b2095e9bce272b9e6abf9b93fe\""
Aug 13 00:56:16.153484 systemd[1]: Started cri-containerd-990e83d40e506376b2a31220dd8a75268537b6b2095e9bce272b9e6abf9b93fe.scope.
Aug 13 00:56:16.159692 kubelet[1936]: I0813 00:56:16.159602 1936 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:56:16Z","lastTransitionTime":"2025-08-13T00:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:56:16.187655 env[1203]: time="2025-08-13T00:56:16.187607483Z" level=info msg="StartContainer for \"990e83d40e506376b2a31220dd8a75268537b6b2095e9bce272b9e6abf9b93fe\" returns successfully"
Aug 13 00:56:16.198413 systemd[1]: cri-containerd-990e83d40e506376b2a31220dd8a75268537b6b2095e9bce272b9e6abf9b93fe.scope: Deactivated successfully.
Aug 13 00:56:16.231107 env[1203]: time="2025-08-13T00:56:16.231039903Z" level=info msg="shim disconnected" id=990e83d40e506376b2a31220dd8a75268537b6b2095e9bce272b9e6abf9b93fe
Aug 13 00:56:16.231305 env[1203]: time="2025-08-13T00:56:16.231109354Z" level=warning msg="cleaning up after shim disconnected" id=990e83d40e506376b2a31220dd8a75268537b6b2095e9bce272b9e6abf9b93fe namespace=k8s.io
Aug 13 00:56:16.231305 env[1203]: time="2025-08-13T00:56:16.231125045Z" level=info msg="cleaning up dead shim"
Aug 13 00:56:16.239109 env[1203]: time="2025-08-13T00:56:16.239044607Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3888 runtime=io.containerd.runc.v2\n"
Aug 13 00:56:16.689425 kubelet[1936]: E0813 00:56:16.689371 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:56:16.691211 env[1203]: time="2025-08-13T00:56:16.691151711Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:56:16.704886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319504041.mount: Deactivated successfully.
Aug 13 00:56:16.707460 env[1203]: time="2025-08-13T00:56:16.707421184Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801\""
Aug 13 00:56:16.710683 env[1203]: time="2025-08-13T00:56:16.710638346Z" level=info msg="StartContainer for \"cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801\""
Aug 13 00:56:16.728125 systemd[1]: Started cri-containerd-cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801.scope.
Aug 13 00:56:16.751526 env[1203]: time="2025-08-13T00:56:16.751467648Z" level=info msg="StartContainer for \"cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801\" returns successfully"
Aug 13 00:56:16.755820 systemd[1]: cri-containerd-cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801.scope: Deactivated successfully.
Aug 13 00:56:16.775272 env[1203]: time="2025-08-13T00:56:16.775209607Z" level=info msg="shim disconnected" id=cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801 Aug 13 00:56:16.775272 env[1203]: time="2025-08-13T00:56:16.775267797Z" level=warning msg="cleaning up after shim disconnected" id=cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801 namespace=k8s.io Aug 13 00:56:16.775272 env[1203]: time="2025-08-13T00:56:16.775277355Z" level=info msg="cleaning up dead shim" Aug 13 00:56:16.781621 env[1203]: time="2025-08-13T00:56:16.781573188Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n" Aug 13 00:56:17.471780 kubelet[1936]: I0813 00:56:17.471732 1936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3222d61-e1f2-46ba-b32c-3e131ced4893" path="/var/lib/kubelet/pods/d3222d61-e1f2-46ba-b32c-3e131ced4893/volumes" Aug 13 00:56:17.620644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdb983a830677e18172257ed30b3fa00eb4460e7a73363b934b60ea8ce056801-rootfs.mount: Deactivated successfully. 
Aug 13 00:56:17.692894 kubelet[1936]: E0813 00:56:17.692844 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:17.694744 env[1203]: time="2025-08-13T00:56:17.694681242Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:56:17.714041 env[1203]: time="2025-08-13T00:56:17.713968802Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495\"" Aug 13 00:56:17.714659 env[1203]: time="2025-08-13T00:56:17.714615859Z" level=info msg="StartContainer for \"52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495\"" Aug 13 00:56:17.734620 systemd[1]: Started cri-containerd-52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495.scope. Aug 13 00:56:17.766373 env[1203]: time="2025-08-13T00:56:17.766322321Z" level=info msg="StartContainer for \"52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495\" returns successfully" Aug 13 00:56:17.767475 systemd[1]: cri-containerd-52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495.scope: Deactivated successfully. 
Aug 13 00:56:17.794918 env[1203]: time="2025-08-13T00:56:17.794856808Z" level=info msg="shim disconnected" id=52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495 Aug 13 00:56:17.794918 env[1203]: time="2025-08-13T00:56:17.794916121Z" level=warning msg="cleaning up after shim disconnected" id=52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495 namespace=k8s.io Aug 13 00:56:17.794918 env[1203]: time="2025-08-13T00:56:17.794928655Z" level=info msg="cleaning up dead shim" Aug 13 00:56:17.810195 env[1203]: time="2025-08-13T00:56:17.810131889Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4004 runtime=io.containerd.runc.v2\n" Aug 13 00:56:18.470212 kubelet[1936]: E0813 00:56:18.470143 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:18.470212 kubelet[1936]: E0813 00:56:18.470226 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:18.507004 kubelet[1936]: E0813 00:56:18.506951 1936 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:56:18.620797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52e9c287dfb47666c021cf475ddc1e146ea0a23ff578e4e59d4c0b055bf28495-rootfs.mount: Deactivated successfully. 
Aug 13 00:56:18.697448 kubelet[1936]: E0813 00:56:18.697408 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:18.699077 env[1203]: time="2025-08-13T00:56:18.699028223Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:56:18.733393 env[1203]: time="2025-08-13T00:56:18.733223351Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831\"" Aug 13 00:56:18.733992 env[1203]: time="2025-08-13T00:56:18.733945650Z" level=info msg="StartContainer for \"2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831\"" Aug 13 00:56:18.757580 systemd[1]: Started cri-containerd-2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831.scope. Aug 13 00:56:18.778327 systemd[1]: cri-containerd-2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831.scope: Deactivated successfully. 
Aug 13 00:56:18.785543 env[1203]: time="2025-08-13T00:56:18.785464415Z" level=info msg="StartContainer for \"2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831\" returns successfully" Aug 13 00:56:18.808295 env[1203]: time="2025-08-13T00:56:18.808218600Z" level=info msg="shim disconnected" id=2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831 Aug 13 00:56:18.808295 env[1203]: time="2025-08-13T00:56:18.808288683Z" level=warning msg="cleaning up after shim disconnected" id=2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831 namespace=k8s.io Aug 13 00:56:18.808295 env[1203]: time="2025-08-13T00:56:18.808304683Z" level=info msg="cleaning up dead shim" Aug 13 00:56:18.818192 env[1203]: time="2025-08-13T00:56:18.818136615Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:56:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4059 runtime=io.containerd.runc.v2\n" Aug 13 00:56:19.620631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2470b3e667d24dca2635f24f20265d8e326f57ea4a516dd80e7f7af9b4aac831-rootfs.mount: Deactivated successfully. 
Aug 13 00:56:19.701659 kubelet[1936]: E0813 00:56:19.701625 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:19.703593 env[1203]: time="2025-08-13T00:56:19.703537603Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:56:19.718995 env[1203]: time="2025-08-13T00:56:19.718936414Z" level=info msg="CreateContainer within sandbox \"da1cd746e8b3a28f7e1726ed2e864407b5cd0442cda4c079746942f8cc7e9e34\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07993aade3182df844f4b9c745637079eaa1ae823a0506669f8628a2f68e3957\"" Aug 13 00:56:19.719983 env[1203]: time="2025-08-13T00:56:19.719943043Z" level=info msg="StartContainer for \"07993aade3182df844f4b9c745637079eaa1ae823a0506669f8628a2f68e3957\"" Aug 13 00:56:19.739144 systemd[1]: Started cri-containerd-07993aade3182df844f4b9c745637079eaa1ae823a0506669f8628a2f68e3957.scope. 
Aug 13 00:56:19.762052 env[1203]: time="2025-08-13T00:56:19.761995281Z" level=info msg="StartContainer for \"07993aade3182df844f4b9c745637079eaa1ae823a0506669f8628a2f68e3957\" returns successfully" Aug 13 00:56:20.022804 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 00:56:20.706468 kubelet[1936]: E0813 00:56:20.706420 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:20.723541 kubelet[1936]: I0813 00:56:20.723458 1936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tq9v8" podStartSLOduration=5.7234362050000005 podStartE2EDuration="5.723436205s" podCreationTimestamp="2025-08-13 00:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:56:20.723260031 +0000 UTC m=+97.363766346" watchObservedRunningTime="2025-08-13 00:56:20.723436205 +0000 UTC m=+97.363942510" Aug 13 00:56:22.072545 kubelet[1936]: E0813 00:56:22.072501 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:22.746991 systemd-networkd[1020]: lxc_health: Link UP Aug 13 00:56:22.757965 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:56:22.757703 systemd-networkd[1020]: lxc_health: Gained carrier Aug 13 00:56:24.072850 kubelet[1936]: E0813 00:56:24.072613 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:24.713816 kubelet[1936]: E0813 00:56:24.713744 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:24.748033 systemd-networkd[1020]: lxc_health: Gained IPv6LL Aug 13 00:56:25.715859 kubelet[1936]: E0813 00:56:25.715811 1936 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:56:27.189079 systemd[1]: run-containerd-runc-k8s.io-07993aade3182df844f4b9c745637079eaa1ae823a0506669f8628a2f68e3957-runc.BcGwua.mount: Deactivated successfully. Aug 13 00:56:29.320390 sshd[3771]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:29.323173 systemd[1]: sshd@28-10.0.0.21:22-10.0.0.1:60924.service: Deactivated successfully. Aug 13 00:56:29.323937 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:56:29.324586 systemd-logind[1189]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:56:29.325352 systemd-logind[1189]: Removed session 29.