Sep 13 00:53:59.850410 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025 Sep 13 00:53:59.850430 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:53:59.850438 kernel: BIOS-provided physical RAM map: Sep 13 00:53:59.850444 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 13 00:53:59.850449 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 13 00:53:59.850454 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 13 00:53:59.850461 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 13 00:53:59.850466 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 13 00:53:59.850473 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 00:53:59.850478 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 13 00:53:59.850483 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 13 00:53:59.850489 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 13 00:53:59.850494 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 13 00:53:59.850499 kernel: NX (Execute Disable) protection: active Sep 13 00:53:59.850507 kernel: SMBIOS 2.8 present. Sep 13 00:53:59.850513 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 13 00:53:59.850519 kernel: Hypervisor detected: KVM Sep 13 00:53:59.850525 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:53:59.850531 kernel: kvm-clock: cpu 0, msr 2219f001, primary cpu clock Sep 13 00:53:59.850536 kernel: kvm-clock: using sched offset of 2419515451 cycles Sep 13 00:53:59.850543 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:53:59.850549 kernel: tsc: Detected 2794.750 MHz processor Sep 13 00:53:59.850555 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:53:59.850563 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:53:59.850568 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 13 00:53:59.850574 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:53:59.850598 kernel: Using GB pages for direct mapping Sep 13 00:53:59.850604 kernel: ACPI: Early table checksum verification disabled Sep 13 00:53:59.850610 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 13 00:53:59.850616 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:53:59.850622 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:53:59.850628 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:53:59.850635 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 13 00:53:59.850641 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:53:59.850647 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:53:59.850653 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:53:59.850659 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:53:59.850665 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 13 00:53:59.850671 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 13 00:53:59.850677 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 13 00:53:59.850687 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 13 00:53:59.850693 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 13 00:53:59.850699 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 13 00:53:59.850706 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 13 00:53:59.850712 kernel: No NUMA configuration found Sep 13 00:53:59.850718 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 13 00:53:59.850726 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 13 00:53:59.850734 kernel: Zone ranges: Sep 13 00:53:59.850741 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:53:59.850749 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 13 00:53:59.850756 kernel: Normal empty Sep 13 00:53:59.850762 kernel: Movable zone start for each node Sep 13 00:53:59.850768 kernel: Early memory node ranges Sep 13 00:53:59.850775 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 13 00:53:59.850781 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 13 00:53:59.850787 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 13 00:53:59.850796 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:53:59.850802 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 13 00:53:59.850808 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 13 00:53:59.850814 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 00:53:59.850821 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:53:59.850827 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:53:59.850833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:53:59.850840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:53:59.850846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:53:59.850854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:53:59.850860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:53:59.850866 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:53:59.850873 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:53:59.850879 kernel: TSC deadline timer available Sep 13 00:53:59.850886 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 13 00:53:59.850892 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 13 00:53:59.850921 kernel: kvm-guest: setup PV sched yield Sep 13 00:53:59.850932 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 13 00:53:59.850944 kernel: Booting paravirtualized kernel on KVM Sep 13 00:53:59.850951 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:53:59.850957 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 13 00:53:59.850964 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Sep 13 00:53:59.850970 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Sep 13 00:53:59.850976 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 13 00:53:59.850982 kernel: kvm-guest: setup async PF for cpu 0 Sep 13 00:53:59.850989 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Sep 13 00:53:59.850995 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:53:59.851002 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:53:59.851009 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Sep 13 00:53:59.851015 kernel: Policy zone: DMA32 Sep 13 00:53:59.851022 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:53:59.851029 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:53:59.851036 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:53:59.851042 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:53:59.851049 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:53:59.851057 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved) Sep 13 00:53:59.851063 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:53:59.851070 kernel: ftrace: allocating 34614 entries in 136 pages Sep 13 00:53:59.851076 kernel: ftrace: allocated 136 pages with 2 groups Sep 13 00:53:59.851082 kernel: rcu: Hierarchical RCU implementation. Sep 13 00:53:59.851089 kernel: rcu: RCU event tracing is enabled. Sep 13 00:53:59.851096 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:53:59.851102 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:53:59.851109 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:53:59.851121 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:53:59.851128 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:53:59.851134 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 13 00:53:59.851152 kernel: random: crng init done Sep 13 00:53:59.851158 kernel: Console: colour VGA+ 80x25 Sep 13 00:53:59.851164 kernel: printk: console [ttyS0] enabled Sep 13 00:53:59.851171 kernel: ACPI: Core revision 20210730 Sep 13 00:53:59.851177 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 13 00:53:59.851184 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:53:59.851191 kernel: x2apic enabled Sep 13 00:53:59.851198 kernel: Switched APIC routing to physical x2apic. Sep 13 00:53:59.851204 kernel: kvm-guest: setup PV IPIs Sep 13 00:53:59.851210 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:53:59.851217 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 13 00:53:59.851223 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 13 00:53:59.851230 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 00:53:59.851236 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 13 00:53:59.851250 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 13 00:53:59.851262 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:53:59.851269 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:53:59.851276 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:53:59.851284 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 13 00:53:59.851290 kernel: active return thunk: retbleed_return_thunk Sep 13 00:53:59.851297 kernel: RETBleed: Mitigation: untrained return thunk Sep 13 00:53:59.851303 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:53:59.851310 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 13 00:53:59.851317 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:53:59.851325 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:53:59.851332 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:53:59.851339 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:53:59.851345 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 13 00:53:59.851352 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:53:59.851359 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:53:59.851370 kernel: LSM: Security Framework initializing Sep 13 00:53:59.851388 kernel: SELinux: Initializing. Sep 13 00:53:59.851397 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:53:59.851404 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:53:59.851411 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 13 00:53:59.851418 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 13 00:53:59.851424 kernel: ... version: 0 Sep 13 00:53:59.851431 kernel: ... bit width: 48 Sep 13 00:53:59.851438 kernel: ... generic registers: 6 Sep 13 00:53:59.851445 kernel: ... value mask: 0000ffffffffffff Sep 13 00:53:59.851451 kernel: ... max period: 00007fffffffffff Sep 13 00:53:59.851459 kernel: ... fixed-purpose events: 0 Sep 13 00:53:59.851466 kernel: ... event mask: 000000000000003f Sep 13 00:53:59.851472 kernel: signal: max sigframe size: 1776 Sep 13 00:53:59.851479 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:53:59.851486 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:53:59.851492 kernel: x86: Booting SMP configuration: Sep 13 00:53:59.851499 kernel: .... 
node #0, CPUs: #1 Sep 13 00:53:59.851506 kernel: kvm-clock: cpu 1, msr 2219f041, secondary cpu clock Sep 13 00:53:59.851512 kernel: kvm-guest: setup async PF for cpu 1 Sep 13 00:53:59.851520 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Sep 13 00:53:59.851538 kernel: #2 Sep 13 00:53:59.851545 kernel: kvm-clock: cpu 2, msr 2219f081, secondary cpu clock Sep 13 00:53:59.851552 kernel: kvm-guest: setup async PF for cpu 2 Sep 13 00:53:59.851559 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Sep 13 00:53:59.851565 kernel: #3 Sep 13 00:53:59.851572 kernel: kvm-clock: cpu 3, msr 2219f0c1, secondary cpu clock Sep 13 00:53:59.851579 kernel: kvm-guest: setup async PF for cpu 3 Sep 13 00:53:59.851585 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Sep 13 00:53:59.851605 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:53:59.851612 kernel: smpboot: Max logical packages: 1 Sep 13 00:53:59.851618 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 13 00:53:59.851625 kernel: devtmpfs: initialized Sep 13 00:53:59.851632 kernel: x86/mm: Memory block size: 128MB Sep 13 00:53:59.851639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:53:59.851646 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:53:59.851652 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:53:59.851673 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:53:59.851684 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:53:59.851692 kernel: audit: type=2000 audit(1757724840.515:1): state=initialized audit_enabled=0 res=1 Sep 13 00:53:59.851699 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:53:59.851706 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:53:59.851712 kernel: cpuidle: using governor menu Sep 13 00:53:59.851719 kernel: ACPI: bus type PCI registered Sep 13 00:53:59.851726 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:53:59.851744 kernel: dca service started, version 1.12.1 Sep 13 00:53:59.851751 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 00:53:59.851757 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 13 00:53:59.851766 kernel: PCI: Using configuration type 1 for base access Sep 13 00:53:59.851773 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:53:59.851780 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:53:59.851786 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:53:59.851804 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:53:59.851811 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:53:59.851818 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:53:59.851825 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:53:59.851831 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:53:59.851839 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:53:59.851857 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:53:59.851864 kernel: ACPI: Interpreter enabled Sep 13 00:53:59.851871 kernel: ACPI: PM: (supports S0 S3 S5) Sep 13 00:53:59.851878 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:53:59.851885 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:53:59.851892 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 00:53:59.851898 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:53:59.852040 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:53:59.852117 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 13 00:53:59.852227 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 13 00:53:59.852237 kernel: PCI host bridge to bus 0000:00 Sep 13 00:53:59.852332 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:53:59.852410 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:53:59.852484 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:53:59.852563 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 13 00:53:59.852637 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 00:53:59.852711 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 13 00:53:59.852799 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:53:59.852914 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 00:53:59.853018 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 13 00:53:59.853105 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 13 00:53:59.853216 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 13 00:53:59.853326 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 13 00:53:59.853422 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:53:59.853537 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:53:59.853608 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 13 00:53:59.853679 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 13 00:53:59.853750 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 13 00:53:59.853829 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 13 00:53:59.853899 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 13 00:53:59.853968 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 13 00:53:59.854035 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Sep 13 00:53:59.854110 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 13 00:53:59.854193 
kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 13 00:53:59.854273 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 13 00:53:59.854339 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 13 00:53:59.854407 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 13 00:53:59.854480 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 00:53:59.854547 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 00:53:59.854620 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 00:53:59.854687 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 13 00:53:59.854757 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 13 00:53:59.854835 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 00:53:59.854901 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 13 00:53:59.854910 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:53:59.854917 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:53:59.854924 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:53:59.854931 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:53:59.854941 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 00:53:59.854947 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 00:53:59.854954 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 00:53:59.854961 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 00:53:59.854968 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 00:53:59.854974 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 00:53:59.854981 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 00:53:59.854987 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 00:53:59.854994 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 00:53:59.855002 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 00:53:59.855009 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 00:53:59.855015 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 00:53:59.855022 kernel: iommu: Default domain type: Translated Sep 13 00:53:59.855029 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:53:59.855095 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 00:53:59.855176 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:53:59.855252 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 13 00:53:59.855261 kernel: vgaarb: loaded Sep 13 00:53:59.855271 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:53:59.855278 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:53:59.855284 kernel: PTP clock support registered Sep 13 00:53:59.855291 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:53:59.855298 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:53:59.855304 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 13 00:53:59.855311 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 13 00:53:59.855318 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 13 00:53:59.855324 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 13 00:53:59.855332 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:53:59.855339 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:53:59.855346 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:53:59.855353 kernel: pnp: PnP ACPI init Sep 13 00:53:59.855427 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 00:53:59.855437 kernel: pnp: PnP ACPI: found 6 devices Sep 13 00:53:59.855444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:53:59.855451 kernel: NET: Registered PF_INET protocol family Sep 13 00:53:59.855460 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:53:59.855466 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:53:59.855473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:53:59.855480 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:53:59.855487 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 13 00:53:59.855493 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:53:59.855500 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:53:59.855507 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:53:59.855513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:53:59.855521 kernel: NET: Registered PF_XDP protocol family Sep 13 00:53:59.855583 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:53:59.855644 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:53:59.855702 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:53:59.855762 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 13 00:53:59.855821 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 00:53:59.855882 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 13 00:53:59.855891 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:53:59.855900 kernel: Initialise system trusted keyrings Sep 13 00:53:59.855906 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:53:59.855913 kernel: Key type asymmetric registered Sep 13 00:53:59.855920 kernel: Asymmetric key parser 'x509' registered Sep 13 00:53:59.855926 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 13 00:53:59.855933 kernel: io scheduler mq-deadline registered Sep 13 00:53:59.855940 kernel: io scheduler kyber registered Sep 13 00:53:59.855946 kernel: io scheduler bfq registered Sep 13 00:53:59.855953 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:53:59.855962 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 13 00:53:59.855969 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 
00:53:59.855975 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 13 00:53:59.855982 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:53:59.855989 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:53:59.855996 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:53:59.856002 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:53:59.856009 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:53:59.856078 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 13 00:53:59.856090 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:53:59.856164 kernel: rtc_cmos 00:04: registered as rtc0 Sep 13 00:53:59.856225 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:53:59 UTC (1757724839) Sep 13 00:53:59.856297 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 13 00:53:59.856306 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:53:59.856313 kernel: Segment Routing with IPv6 Sep 13 00:53:59.856320 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:53:59.856326 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:53:59.856335 kernel: Key type dns_resolver registered Sep 13 00:53:59.856342 kernel: IPI shorthand broadcast: enabled Sep 13 00:53:59.856348 kernel: sched_clock: Marking stable (392001627, 98736180)->(534188022, -43450215) Sep 13 00:53:59.856355 kernel: registered taskstats version 1 Sep 13 00:53:59.856362 kernel: Loading compiled-in X.509 certificates Sep 13 00:53:59.856369 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 00:53:59.856375 kernel: Key type .fscrypt registered Sep 13 00:53:59.856382 kernel: Key type fscrypt-provisioning registered Sep 13 00:53:59.856388 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:53:59.856396 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:53:59.856403 kernel: ima: No architecture policies found Sep 13 00:53:59.856410 kernel: clk: Disabling unused clocks Sep 13 00:53:59.856416 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 00:53:59.856423 kernel: Write protecting the kernel read-only data: 28672k Sep 13 00:53:59.856430 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 00:53:59.856436 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 00:53:59.856443 kernel: Run /init as init process Sep 13 00:53:59.856450 kernel: with arguments: Sep 13 00:53:59.856457 kernel: /init Sep 13 00:53:59.856464 kernel: with environment: Sep 13 00:53:59.856470 kernel: HOME=/ Sep 13 00:53:59.856477 kernel: TERM=linux Sep 13 00:53:59.856483 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:53:59.856492 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:53:59.856502 systemd[1]: Detected virtualization kvm. Sep 13 00:53:59.856509 systemd[1]: Detected architecture x86-64. Sep 13 00:53:59.856518 systemd[1]: Running in initrd. Sep 13 00:53:59.856525 systemd[1]: No hostname configured, using default hostname. Sep 13 00:53:59.856532 systemd[1]: Hostname set to <localhost>.
Sep 13 00:53:59.856539 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:53:59.856546 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:53:59.856553 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:53:59.856561 systemd[1]: Reached target cryptsetup.target. Sep 13 00:53:59.856568 systemd[1]: Reached target paths.target. Sep 13 00:53:59.856576 systemd[1]: Reached target slices.target. Sep 13 00:53:59.856590 systemd[1]: Reached target swap.target. Sep 13 00:53:59.856598 systemd[1]: Reached target timers.target. Sep 13 00:53:59.856606 systemd[1]: Listening on iscsid.socket. Sep 13 00:53:59.856613 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:53:59.856622 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:53:59.856629 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:53:59.856637 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:53:59.856644 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:53:59.856652 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:53:59.856659 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:53:59.856667 systemd[1]: Reached target sockets.target. Sep 13 00:53:59.856674 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:53:59.856682 systemd[1]: Finished network-cleanup.service. Sep 13 00:53:59.856690 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:53:59.856698 systemd[1]: Starting systemd-journald.service... Sep 13 00:53:59.856705 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:53:59.856712 systemd[1]: Starting systemd-resolved.service... Sep 13 00:53:59.856720 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:53:59.856727 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:53:59.856748 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:53:59.856756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:53:59.856767 systemd-journald[198]: Journal started Sep 13 00:53:59.856805 systemd-journald[198]: Runtime Journal (/run/log/journal/64ae95dede604290bfcc465aef24e717) is 6.0M, max 48.5M, 42.5M free. Sep 13 00:53:59.848752 systemd-modules-load[199]: Inserted module 'overlay' Sep 13 00:53:59.881410 systemd[1]: Started systemd-journald.service. Sep 13 00:53:59.881425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:53:59.861065 systemd-resolved[200]: Positive Trust Anchors: Sep 13 00:53:59.861073 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:53:59.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:59.861100 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:53:59.892059 kernel: audit: type=1130 audit(1757724839.882:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.892072 kernel: audit: type=1130 audit(1757724839.882:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.892083 kernel: audit: type=1130 audit(1757724839.888:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.863268 systemd-resolved[200]: Defaulting to hostname 'linux'. Sep 13 00:53:59.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.882496 systemd[1]: Started systemd-resolved.service. Sep 13 00:53:59.904699 kernel: audit: type=1130 audit(1757724839.892:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.883004 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:53:59.889089 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:53:59.892497 systemd[1]: Reached target nss-lookup.target. Sep 13 00:53:59.904550 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:53:59.908686 systemd-modules-load[199]: Inserted module 'br_netfilter' Sep 13 00:53:59.909603 kernel: Bridge firewalling registered Sep 13 00:53:59.916193 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:53:59.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.917408 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:53:59.921722 kernel: audit: type=1130 audit(1757724839.916:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:53:59.925894 dracut-cmdline[215]: dracut-dracut-053 Sep 13 00:53:59.927813 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:53:59.937181 kernel: SCSI subsystem initialized Sep 13 00:53:59.949133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:53:59.949226 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:53:59.949237 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:53:59.951798 systemd-modules-load[199]: Inserted module 'dm_multipath' Sep 13 00:53:59.952630 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:53:59.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.957160 kernel: audit: type=1130 audit(1757724839.953:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.957127 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:53:59.965018 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:53:59.969176 kernel: audit: type=1130 audit(1757724839.965:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:53:59.993166 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:54:00.009176 kernel: iscsi: registered transport (tcp) Sep 13 00:54:00.030381 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:54:00.030429 kernel: QLogic iSCSI HBA Driver Sep 13 00:54:00.061427 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:54:00.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.062573 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:54:00.067038 kernel: audit: type=1130 audit(1757724840.061:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:00.112189 kernel: raid6: avx2x4 gen() 29330 MB/s Sep 13 00:54:00.129172 kernel: raid6: avx2x4 xor() 7739 MB/s Sep 13 00:54:00.146176 kernel: raid6: avx2x2 gen() 26916 MB/s Sep 13 00:54:00.163170 kernel: raid6: avx2x2 xor() 17809 MB/s Sep 13 00:54:00.180188 kernel: raid6: avx2x1 gen() 26116 MB/s Sep 13 00:54:00.197194 kernel: raid6: avx2x1 xor() 15295 MB/s Sep 13 00:54:00.214196 kernel: raid6: sse2x4 gen() 14668 MB/s Sep 13 00:54:00.231204 kernel: raid6: sse2x4 xor() 7470 MB/s Sep 13 00:54:00.248190 kernel: raid6: sse2x2 gen() 16423 MB/s Sep 13 00:54:00.265185 kernel: raid6: sse2x2 xor() 9826 MB/s Sep 13 00:54:00.282184 kernel: raid6: sse2x1 gen() 12426 MB/s Sep 13 00:54:00.299526 kernel: raid6: sse2x1 xor() 7788 MB/s Sep 13 00:54:00.299603 kernel: raid6: using algorithm avx2x4 gen() 29330 MB/s Sep 13 00:54:00.299617 kernel: raid6: .... xor() 7739 MB/s, rmw enabled Sep 13 00:54:00.300176 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:54:00.312167 kernel: xor: automatically using best checksumming function avx Sep 13 00:54:00.404181 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:54:00.413038 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:54:00.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.414000 audit: BPF prog-id=7 op=LOAD Sep 13 00:54:00.417000 audit: BPF prog-id=8 op=LOAD Sep 13 00:54:00.417729 systemd[1]: Starting systemd-udevd.service... Sep 13 00:54:00.419116 kernel: audit: type=1130 audit(1757724840.413:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.431967 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 13 00:54:00.436441 systemd[1]: Started systemd-udevd.service. Sep 13 00:54:00.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.437586 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:54:00.448919 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 13 00:54:00.471781 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:54:00.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.473426 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:54:00.504497 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:54:00.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:00.548252 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:54:00.558164 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:54:00.558194 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:54:00.565818 kernel: AES CTR mode by8 optimization enabled Sep 13 00:54:00.565831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 13 00:54:00.565840 kernel: GPT:9289727 != 19775487 Sep 13 00:54:00.565848 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:54:00.565857 kernel: GPT:9289727 != 19775487 Sep 13 00:54:00.565865 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:54:00.565877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:54:00.567164 kernel: libata version 3.00 loaded. Sep 13 00:54:00.575198 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:54:00.592425 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:54:00.592441 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 00:54:00.592532 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:54:00.592610 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (453) Sep 13 00:54:00.592620 kernel: scsi host0: ahci Sep 13 00:54:00.592711 kernel: scsi host1: ahci Sep 13 00:54:00.592796 kernel: scsi host2: ahci Sep 13 00:54:00.592875 kernel: scsi host3: ahci Sep 13 00:54:00.592954 kernel: scsi host4: ahci Sep 13 00:54:00.593033 kernel: scsi host5: ahci Sep 13 00:54:00.593112 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 13 00:54:00.593123 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 13 00:54:00.593132 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 13 00:54:00.593155 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 13 00:54:00.593164 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 13 00:54:00.593173 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 13 00:54:00.584602 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:54:00.627524 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:54:00.628565 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:54:00.636763 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:54:00.641530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:54:00.643160 systemd[1]: Starting disk-uuid.service... Sep 13 00:54:00.675137 disk-uuid[533]: Primary Header is updated. Sep 13 00:54:00.675137 disk-uuid[533]: Secondary Entries is updated. Sep 13 00:54:00.675137 disk-uuid[533]: Secondary Header is updated. 
Sep 13 00:54:00.679182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:54:00.682172 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:54:00.906369 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:54:00.906419 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:54:00.906429 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:54:00.907174 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:54:00.908175 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:54:00.909168 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:54:00.910171 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:54:00.911553 kernel: ata3.00: applying bridge limits Sep 13 00:54:00.911568 kernel: ata3.00: configured for UDMA/100 Sep 13 00:54:00.912174 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:54:00.946168 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:54:00.963816 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:54:00.963834 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:54:01.683179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:54:01.683318 disk-uuid[534]: The operation has completed successfully. Sep 13 00:54:01.708332 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:54:01.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.708414 systemd[1]: Finished disk-uuid.service. Sep 13 00:54:01.712992 systemd[1]: Starting verity-setup.service... Sep 13 00:54:01.724174 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 13 00:54:01.742323 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:54:01.744334 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:54:01.747557 systemd[1]: Finished verity-setup.service. Sep 13 00:54:01.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.804166 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:54:01.804439 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:54:01.804855 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:54:01.805846 systemd[1]: Starting ignition-setup.service... Sep 13 00:54:01.808177 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:54:01.818675 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:54:01.818712 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:54:01.818722 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:54:01.826392 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:54:01.834345 systemd[1]: Finished ignition-setup.service. Sep 13 00:54:01.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:01.836011 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:54:01.870644 ignition[659]: Ignition 2.14.0 Sep 13 00:54:01.870657 ignition[659]: Stage: fetch-offline Sep 13 00:54:01.870703 ignition[659]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:54:01.870710 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:54:01.870804 ignition[659]: parsed url from cmdline: "" Sep 13 00:54:01.870808 ignition[659]: no config URL provided Sep 13 00:54:01.870812 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:54:01.870819 ignition[659]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:54:01.870836 ignition[659]: op(1): [started] loading QEMU firmware config module Sep 13 00:54:01.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.877374 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:54:01.880000 audit: BPF prog-id=9 op=LOAD Sep 13 00:54:01.870843 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:54:01.875291 ignition[659]: op(1): [finished] loading QEMU firmware config module Sep 13 00:54:01.880813 systemd[1]: Starting systemd-networkd.service... Sep 13 00:54:01.918525 ignition[659]: parsing config with SHA512: f0d1f480575b1792497ea40ab7da73c864a4a2fee031aaa789ccb1ace27b27254966d0baf19e79679939964a30a110b01d1efa5592d4fb8d2f6d890d98837733 Sep 13 00:54:01.924346 unknown[659]: fetched base config from "system" Sep 13 00:54:01.924368 unknown[659]: fetched user config from "qemu" Sep 13 00:54:01.924797 ignition[659]: fetch-offline: fetch-offline passed Sep 13 00:54:01.924846 ignition[659]: Ignition finished successfully Sep 13 00:54:01.929966 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:54:01.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.938563 systemd-networkd[730]: lo: Link UP Sep 13 00:54:01.938572 systemd-networkd[730]: lo: Gained carrier Sep 13 00:54:01.938948 systemd-networkd[730]: Enumeration completed Sep 13 00:54:01.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.939036 systemd[1]: Started systemd-networkd.service. Sep 13 00:54:01.939140 systemd-networkd[730]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:54:01.940334 systemd-networkd[730]: eth0: Link UP Sep 13 00:54:01.940337 systemd-networkd[730]: eth0: Gained carrier Sep 13 00:54:01.941949 systemd[1]: Reached target network.target. Sep 13 00:54:01.944724 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:54:01.949040 systemd[1]: Starting ignition-kargs.service... Sep 13 00:54:01.950987 systemd[1]: Starting iscsiuio.service... Sep 13 00:54:01.951266 systemd-networkd[730]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:54:01.954739 systemd[1]: Started iscsiuio.service. 
Sep 13 00:54:01.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.955767 systemd[1]: Starting iscsid.service... Sep 13 00:54:01.958250 ignition[732]: Ignition 2.14.0 Sep 13 00:54:01.958258 ignition[732]: Stage: kargs Sep 13 00:54:01.959576 iscsid[742]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:54:01.959576 iscsid[742]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 00:54:01.959576 iscsid[742]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:54:01.959576 iscsid[742]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:54:01.959576 iscsid[742]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:54:01.959576 iscsid[742]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:54:01.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.958365 ignition[732]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:54:01.960252 systemd[1]: Started iscsid.service. Sep 13 00:54:01.958373 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:54:01.961843 systemd[1]: Finished ignition-kargs.service. Sep 13 00:54:01.959535 ignition[732]: kargs: kargs passed Sep 13 00:54:01.967883 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:54:01.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.959568 ignition[732]: Ignition finished successfully Sep 13 00:54:01.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.969282 systemd[1]: Starting ignition-disks.service... Sep 13 00:54:01.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:01.978446 ignition[744]: Ignition 2.14.0 Sep 13 00:54:01.977630 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:54:01.978454 ignition[744]: Stage: disks Sep 13 00:54:01.978732 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:54:01.978560 ignition[744]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:54:01.979127 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:54:01.978574 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:54:01.979459 systemd[1]: Reached target remote-fs.target. Sep 13 00:54:01.979437 ignition[744]: disks: disks passed Sep 13 00:54:01.980253 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:54:01.979468 ignition[744]: Ignition finished successfully Sep 13 00:54:01.980738 systemd[1]: Finished ignition-disks.service. Sep 13 00:54:01.980975 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:54:01.981070 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:54:01.981394 systemd[1]: Reached target local-fs.target. Sep 13 00:54:01.981556 systemd[1]: Reached target sysinit.target. Sep 13 00:54:01.981714 systemd[1]: Reached target basic.target. Sep 13 00:54:02.002758 systemd-fsck[764]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:54:01.986984 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:54:01.988521 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:54:02.007501 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:54:02.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.010138 systemd[1]: Mounting sysroot.mount... Sep 13 00:54:02.017169 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:54:02.017134 systemd[1]: Mounted sysroot.mount. Sep 13 00:54:02.018462 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:54:02.020685 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:54:02.022296 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:54:02.022325 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:54:02.022343 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:54:02.027442 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:54:02.029318 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:54:02.032979 initrd-setup-root[774]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:54:02.036178 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:54:02.039539 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:54:02.043293 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:54:02.067164 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:54:02.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.068170 systemd[1]: Starting ignition-mount.service... Sep 13 00:54:02.070080 systemd[1]: Starting sysroot-boot.service... Sep 13 00:54:02.073306 bash[815]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 13 00:54:02.080614 ignition[816]: INFO : Ignition 2.14.0 Sep 13 00:54:02.080614 ignition[816]: INFO : Stage: mount Sep 13 00:54:02.082362 ignition[816]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:54:02.082362 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:54:02.082362 ignition[816]: INFO : mount: mount passed Sep 13 00:54:02.082362 ignition[816]: INFO : Ignition finished successfully Sep 13 00:54:02.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.082399 systemd[1]: Finished ignition-mount.service. Sep 13 00:54:02.089538 systemd[1]: Finished sysroot-boot.service. Sep 13 00:54:02.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:02.754351 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:54:02.761862 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (826) Sep 13 00:54:02.761888 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:54:02.761898 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:54:02.762625 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:54:02.766287 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:54:02.768545 systemd[1]: Starting ignition-files.service... Sep 13 00:54:02.780358 ignition[846]: INFO : Ignition 2.14.0 Sep 13 00:54:02.780358 ignition[846]: INFO : Stage: files Sep 13 00:54:02.781908 ignition[846]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:54:02.781908 ignition[846]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:54:02.783929 ignition[846]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:54:02.785469 ignition[846]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:54:02.785469 ignition[846]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:54:02.789129 ignition[846]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:54:02.790558 ignition[846]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:54:02.791953 ignition[846]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:54:02.791953 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:54:02.791953 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 00:54:02.791292 unknown[846]: wrote ssh authorized keys file for user: core Sep 13 00:54:02.832365 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:54:03.441221 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:54:03.443264 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:54:03.443264 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 00:54:03.537160 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:54:03.622261 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:54:03.622261 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:54:03.625866 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:54:03.658279 systemd-networkd[730]: eth0: Gained IPv6LL Sep 13 00:54:03.853212 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:54:04.248035 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:54:04.248035 ignition[846]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:54:04.251869 ignition[846]: 
INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:54:04.251869 ignition[846]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:54:04.277649 ignition[846]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:54:04.279515 ignition[846]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:54:04.279515 ignition[846]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:54:04.279515 ignition[846]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:54:04.279515 ignition[846]: INFO : files: files passed Sep 13 00:54:04.279515 ignition[846]: INFO : Ignition finished successfully Sep 13 00:54:04.304095 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 13 00:54:04.304127 kernel: audit: type=1130 audit(1757724844.280:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.304171 kernel: audit: type=1130 audit(1757724844.292:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.304186 kernel: audit: type=1130 audit(1757724844.296:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.304199 kernel: audit: type=1131 audit(1757724844.296:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:04.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.279351 systemd[1]: Finished ignition-files.service. Sep 13 00:54:04.281155 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:54:04.286659 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:54:04.309169 initrd-setup-root-after-ignition[869]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 13 00:54:04.287526 systemd[1]: Starting ignition-quench.service... Sep 13 00:54:04.311517 initrd-setup-root-after-ignition[871]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:54:04.289454 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:54:04.292354 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:54:04.292434 systemd[1]: Finished ignition-quench.service. Sep 13 00:54:04.296791 systemd[1]: Reached target ignition-complete.target. Sep 13 00:54:04.324325 kernel: audit: type=1130 audit(1757724844.317:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.324343 kernel: audit: type=1131 audit(1757724844.317:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.304792 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:54:04.316207 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:54:04.316293 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:54:04.317317 systemd[1]: Reached target initrd-fs.target. Sep 13 00:54:04.324324 systemd[1]: Reached target initrd.target. Sep 13 00:54:04.325049 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:54:04.325662 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:54:04.334813 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:54:04.339653 kernel: audit: type=1130 audit(1757724844.335:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.336162 systemd[1]: Starting initrd-cleanup.service... 
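With ignition-complete.target reached, the run's outcome stays recoverable after boot without the console: the summary written by op(13) above lands at /etc/.ignition-result.json once the initrd's /sysroot becomes /, and the stage output remains in the journal. Two likely inspection commands (paths and identifiers taken from the log itself):

    cat /etc/.ignition-result.json   # per-stage result summary written during the files stage
    journalctl -t ignition           # replay the ignition[...] messages from the persistent journal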
Sep 13 00:54:04.343957 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:54:04.344819 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:54:04.346338 systemd[1]: Stopped target timers.target. Sep 13 00:54:04.347806 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:54:04.353700 kernel: audit: type=1131 audit(1757724844.349:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.347892 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:54:04.349353 systemd[1]: Stopped target initrd.target. Sep 13 00:54:04.353805 systemd[1]: Stopped target basic.target. Sep 13 00:54:04.355452 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:54:04.357089 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:54:04.358755 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:54:04.360596 systemd[1]: Stopped target remote-fs.target. Sep 13 00:54:04.362332 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:54:04.364175 systemd[1]: Stopped target sysinit.target. Sep 13 00:54:04.365751 systemd[1]: Stopped target local-fs.target. Sep 13 00:54:04.367439 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:54:04.369072 systemd[1]: Stopped target swap.target. Sep 13 00:54:04.376530 kernel: audit: type=1131 audit(1757724844.372:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.370604 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:54:04.370726 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:54:04.382471 kernel: audit: type=1131 audit(1757724844.378:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.372423 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:54:04.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.376568 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:54:04.376651 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:54:04.378329 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:54:04.378415 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:54:04.382577 systemd[1]: Stopped target paths.target. Sep 13 00:54:04.383954 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:54:04.387182 systemd[1]: Stopped systemd-ask-password-console.path. 
Sep 13 00:54:04.388357 systemd[1]: Stopped target slices.target. Sep 13 00:54:04.389702 systemd[1]: Stopped target sockets.target. Sep 13 00:54:04.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.391415 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:54:04.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.391477 systemd[1]: Closed iscsid.socket. Sep 13 00:54:04.392889 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:54:04.392974 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:54:04.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.394499 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:54:04.403774 ignition[886]: INFO : Ignition 2.14.0 Sep 13 00:54:04.403774 ignition[886]: INFO : Stage: umount Sep 13 00:54:04.403774 ignition[886]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:54:04.403774 ignition[886]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:54:04.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.394578 systemd[1]: Stopped ignition-files.service. Sep 13 00:54:04.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.410934 ignition[886]: INFO : umount: umount passed Sep 13 00:54:04.410934 ignition[886]: INFO : Ignition finished successfully Sep 13 00:54:04.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.396523 systemd[1]: Stopping ignition-mount.service... Sep 13 00:54:04.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.397527 systemd[1]: Stopping iscsiuio.service... Sep 13 00:54:04.398577 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:54:04.398693 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:54:04.401187 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:54:04.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 00:54:04.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.402012 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:54:04.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.402154 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:54:04.403855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:54:04.403962 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:54:04.407616 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:54:04.407686 systemd[1]: Stopped iscsiuio.service. Sep 13 00:54:04.409800 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:54:04.409865 systemd[1]: Stopped ignition-mount.service. Sep 13 00:54:04.411868 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:54:04.411927 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:54:04.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.414454 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:54:04.415152 systemd[1]: Stopped target network.target. Sep 13 00:54:04.416073 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:54:04.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.416099 systemd[1]: Closed iscsiuio.socket. Sep 13 00:54:04.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.417643 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:54:04.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.417673 systemd[1]: Stopped ignition-disks.service. Sep 13 00:54:04.419354 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:54:04.419420 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:54:04.420821 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:54:04.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.420852 systemd[1]: Stopped ignition-setup.service. Sep 13 00:54:04.421736 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:54:04.423453 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:54:04.429182 systemd-networkd[730]: eth0: DHCPv6 lease lost Sep 13 00:54:04.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:04.450000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:54:04.430046 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:54:04.451000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:54:04.430124 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:54:04.433187 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:54:04.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.433215 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:54:04.435270 systemd[1]: Stopping network-cleanup.service... Sep 13 00:54:04.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.436020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:54:04.436057 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:54:04.437844 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:54:04.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.437876 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:54:04.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.439380 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:54:04.439410 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:54:04.441344 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:54:04.444519 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:54:04.444886 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:54:04.444961 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:54:04.449320 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:54:04.449402 systemd[1]: Stopped network-cleanup.service. Sep 13 00:54:04.452675 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:54:04.452777 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:54:04.455300 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:54:04.455335 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:54:04.456825 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:54:04.456851 systemd[1]: Closed systemd-udevd-kernel.socket. 
Sep 13 00:54:04.457433 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:54:04.457465 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:54:04.457615 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:54:04.457641 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:54:04.457763 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:54:04.457788 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:54:04.460118 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:54:04.461341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:54:04.461381 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:54:04.465192 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:54:04.465260 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:54:04.510248 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:54:04.510339 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:54:04.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.512050 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:54:04.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:04.513500 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:54:04.513534 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:54:04.514582 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:54:04.530100 systemd[1]: Switching root. Sep 13 00:54:04.547806 iscsid[742]: iscsid shutting down. Sep 13 00:54:04.548511 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Sep 13 00:54:04.548544 systemd-journald[198]: Journal stopped Sep 13 00:54:07.080651 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:54:07.080701 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:54:07.080711 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:54:07.080722 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:54:07.080734 kernel: SELinux: policy capability open_perms=1 Sep 13 00:54:07.080746 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:54:07.080759 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:54:07.080772 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:54:07.080784 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:54:07.080793 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:54:07.080802 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:54:07.080813 systemd[1]: Successfully loaded SELinux policy in 36.502ms. Sep 13 00:54:07.080834 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.245ms. Sep 13 00:54:07.080845 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:54:07.080859 systemd[1]: Detected virtualization kvm. 
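Both facts systemd just recorded, the hypervisor and the loaded SELinux policy's capabilities, can be re-queried from a shell on the running system; a short sketch:

    systemd-detect-virt   # prints "kvm" on this guest
    cat /sys/fs/selinux/policy_capabilities/network_peer_controls   # prints 1, matching the policy capability line above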
Sep 13 00:54:07.080869 systemd[1]: Detected architecture x86-64. Sep 13 00:54:07.080879 systemd[1]: Detected first boot. Sep 13 00:54:07.080894 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:54:07.080904 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:54:07.080914 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:54:07.080925 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:07.080940 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:07.080951 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:07.080964 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:54:07.080979 systemd[1]: Stopped iscsid.service. Sep 13 00:54:07.080993 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:54:07.081004 systemd[1]: Stopped initrd-switch-root.service. Sep 13 00:54:07.081014 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:54:07.081027 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:54:07.081043 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:54:07.081071 systemd[1]: Created slice system-getty.slice. Sep 13 00:54:07.081087 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:54:07.081101 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:54:07.081114 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:54:07.081128 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:54:07.081196 systemd[1]: Created slice user.slice. Sep 13 00:54:07.081213 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:54:07.081226 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:54:07.081239 systemd[1]: Set up automount boot.automount. Sep 13 00:54:07.081250 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:54:07.081260 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 00:54:07.081272 systemd[1]: Stopped target initrd-fs.target. Sep 13 00:54:07.081286 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 00:54:07.081300 systemd[1]: Reached target integritysetup.target. Sep 13 00:54:07.081316 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:54:07.081332 systemd[1]: Reached target remote-fs.target. Sep 13 00:54:07.081348 systemd[1]: Reached target slices.target. Sep 13 00:54:07.081361 systemd[1]: Reached target swap.target. Sep 13 00:54:07.081374 systemd[1]: Reached target torcx.target. Sep 13 00:54:07.081385 systemd[1]: Reached target veritysetup.target. Sep 13 00:54:07.081395 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:54:07.081405 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:54:07.081415 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:54:07.081425 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:54:07.081435 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:54:07.081445 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:54:07.081457 systemd[1]: Mounting dev-hugepages.mount... 
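The two locksmithd.service warnings above name their own replacements (CPUShares= -> CPUWeight=, MemoryLimit= -> MemoryMax=). Because the vendor unit lives on the read-only /usr, the conventional fix is a drop-in; a hypothetical sketch, with illustrative values that are not taken from this system:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    CPUShares=       # empty assignment clears the deprecated setting
    CPUWeight=100
    MemoryLimit=
    MemoryMax=512M

followed by systemctl daemon-reload so systemd re-reads the unit.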
Sep 13 00:54:07.081467 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:54:07.081482 systemd[1]: Mounting media.mount... Sep 13 00:54:07.081493 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:07.081503 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:54:07.081513 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:54:07.081523 systemd[1]: Mounting tmp.mount... Sep 13 00:54:07.081534 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:54:07.081544 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:07.081556 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:54:07.081566 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:54:07.081576 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:07.081586 systemd[1]: Starting modprobe@drm.service... Sep 13 00:54:07.081597 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:07.081607 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:54:07.081617 systemd[1]: Starting modprobe@loop.service... Sep 13 00:54:07.081627 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:54:07.081638 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:54:07.081649 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 00:54:07.081659 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:54:07.081669 kernel: fuse: init (API version 7.34) Sep 13 00:54:07.081680 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:54:07.081690 kernel: loop: module loaded Sep 13 00:54:07.081700 systemd[1]: Stopped systemd-journald.service. Sep 13 00:54:07.081710 systemd[1]: Starting systemd-journald.service... Sep 13 00:54:07.081720 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:54:07.081731 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:54:07.081742 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:54:07.081752 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:54:07.081763 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:54:07.081772 systemd[1]: Stopped verity-setup.service. Sep 13 00:54:07.081783 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:07.081796 systemd-journald[1004]: Journal started Sep 13 00:54:07.081840 systemd-journald[1004]: Runtime Journal (/run/log/journal/64ae95dede604290bfcc465aef24e717) is 6.0M, max 48.5M, 42.5M free. 
Sep 13 00:54:04.604000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:54:04.884000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:54:04.884000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:54:04.884000 audit: BPF prog-id=10 op=LOAD Sep 13 00:54:04.884000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:54:04.884000 audit: BPF prog-id=11 op=LOAD Sep 13 00:54:04.884000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:54:04.917000 audit[919]: AVC avc: denied { associate } for pid=919 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:54:04.917000 audit[919]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b4 a1=c000146de0 a2=c00014f0c0 a3=32 items=0 ppid=902 pid=919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:04.917000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:54:04.919000 audit[919]: AVC avc: denied { associate } for pid=919 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:54:04.919000 audit[919]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5999 a2=1ed a3=0 items=2 ppid=902 pid=919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:04.919000 audit: CWD cwd="/" Sep 13 00:54:04.919000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:04.919000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:04.919000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:54:06.944000 audit: BPF prog-id=12 op=LOAD Sep 13 00:54:06.944000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:54:06.944000 audit: BPF prog-id=13 op=LOAD Sep 13 00:54:06.944000 audit: BPF prog-id=14 op=LOAD Sep 13 00:54:06.944000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:54:06.944000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:54:06.946000 audit: BPF prog-id=15 op=LOAD Sep 13 00:54:06.946000 audit: BPF prog-id=12 op=UNLOAD Sep 13 
00:54:06.946000 audit: BPF prog-id=16 op=LOAD Sep 13 00:54:06.946000 audit: BPF prog-id=17 op=LOAD Sep 13 00:54:06.946000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:54:06.946000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:54:06.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:06.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:06.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:06.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:06.955000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:54:07.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.062000 audit: BPF prog-id=18 op=LOAD Sep 13 00:54:07.062000 audit: BPF prog-id=19 op=LOAD Sep 13 00:54:07.062000 audit: BPF prog-id=20 op=LOAD Sep 13 00:54:07.062000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:54:07.062000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:54:07.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:07.079000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:54:07.079000 audit[1004]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe0d20f0f0 a2=4000 a3=7ffe0d20f18c items=0 ppid=1 pid=1004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.079000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:54:04.915876 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:06.943533 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:54:04.916096 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:54:06.943543 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 13 00:54:07.083299 systemd[1]: Started systemd-journald.service. Sep 13 00:54:04.916112 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:54:06.946879 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:54:04.916166 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 00:54:04.916179 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 00:54:04.916211 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 00:54:04.916223 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:54:04.916402 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:54:04.916437 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:54:04.916449 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:54:07.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:04.917038 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:54:04.917068 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:54:04.917083 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:54:07.084255 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:54:04.917096 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:54:04.917111 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:54:04.917137 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:54:06.692297 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:54:06.692604 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:54:06.692722 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:54:06.692877 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:54:06.692919 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:54:06.692969 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-09-13T00:54:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:54:07.085294 systemd[1]: Mounted dev-mqueue.mount. 
Sep 13 00:54:07.086244 systemd[1]: Mounted media.mount. Sep 13 00:54:07.087131 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:54:07.088131 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:54:07.089111 systemd[1]: Mounted tmp.mount. Sep 13 00:54:07.090120 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:54:07.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.091298 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:54:07.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.092345 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:54:07.092510 systemd[1]: Finished modprobe@configfs.service. Sep 13 00:54:07.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.093645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:07.093812 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:07.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.094835 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:54:07.094968 systemd[1]: Finished modprobe@drm.service. Sep 13 00:54:07.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.095930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:07.096107 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:07.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.097225 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
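The modprobe@ pattern above ("Deactivated successfully" right beside "Finished") is normal: modprobe@.service is a oneshot template whose instance name is the kernel module to load, so each instance runs modprobe and exits immediately. For example:

    systemctl start modprobe@fuse.service   # roughly equivalent to: modprobe fuse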
Sep 13 00:54:07.097371 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:54:07.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.098347 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:07.098487 systemd[1]: Finished modprobe@loop.service. Sep 13 00:54:07.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.099553 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:54:07.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.100721 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:54:07.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.101889 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:54:07.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.103136 systemd[1]: Reached target network-pre.target. Sep 13 00:54:07.105003 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:54:07.106866 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:54:07.107836 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:54:07.108964 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:54:07.110670 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:54:07.111596 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:54:07.112482 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:54:07.113421 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:54:07.115034 systemd-journald[1004]: Time spent on flushing to /var/log/journal/64ae95dede604290bfcc465aef24e717 is 20.311ms for 1097 entries. Sep 13 00:54:07.115034 systemd-journald[1004]: System Journal (/var/log/journal/64ae95dede604290bfcc465aef24e717) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:54:07.144254 systemd-journald[1004]: Received client request to flush runtime journal. 
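The journald lines above show the runtime-to-persistent handoff: systemd-journal-flush.service asks the daemon to move /run/log/journal into the /var/log/journal location whose quota is printed (8.0M used of a 195.6M cap). The same request and a usage check can be issued manually:

    journalctl --flush        # ask journald to flush the runtime journal to /var/log/journal
    journalctl --disk-usage   # report current persistent journal usage against the cap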
Sep 13 00:54:07.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.116767 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:54:07.118810 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:54:07.122963 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:54:07.124037 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:54:07.125167 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:54:07.126482 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:54:07.137182 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:54:07.138452 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:54:07.140632 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:54:07.141696 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:54:07.145051 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:54:07.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.147863 udevadm[1023]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:54:07.543945 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:54:07.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.545000 audit: BPF prog-id=21 op=LOAD Sep 13 00:54:07.545000 audit: BPF prog-id=22 op=LOAD Sep 13 00:54:07.545000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:54:07.545000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:54:07.546183 systemd[1]: Starting systemd-udevd.service... Sep 13 00:54:07.560856 systemd-udevd[1025]: Using default interface naming scheme 'v252'. Sep 13 00:54:07.572271 systemd[1]: Started systemd-udevd.service. Sep 13 00:54:07.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.574000 audit: BPF prog-id=23 op=LOAD Sep 13 00:54:07.575123 systemd[1]: Starting systemd-networkd.service... Sep 13 00:54:07.580000 audit: BPF prog-id=24 op=LOAD Sep 13 00:54:07.580000 audit: BPF prog-id=25 op=LOAD Sep 13 00:54:07.580000 audit: BPF prog-id=26 op=LOAD Sep 13 00:54:07.581477 systemd[1]: Starting systemd-userdbd.service... 
Sep 13 00:54:07.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.605987 systemd[1]: Started systemd-userdbd.service. Sep 13 00:54:07.609305 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:54:07.618582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:54:07.642555 systemd-networkd[1038]: lo: Link UP Sep 13 00:54:07.642798 systemd-networkd[1038]: lo: Gained carrier Sep 13 00:54:07.643191 systemd-networkd[1038]: Enumeration completed Sep 13 00:54:07.643289 systemd[1]: Started systemd-networkd.service. Sep 13 00:54:07.643825 systemd-networkd[1038]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:54:07.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.644681 systemd-networkd[1038]: eth0: Link UP Sep 13 00:54:07.644780 systemd-networkd[1038]: eth0: Gained carrier Sep 13 00:54:07.650178 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:54:07.654161 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:54:07.657257 systemd-networkd[1038]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:54:07.662000 audit[1037]: AVC avc: denied { confidentiality } for pid=1037 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:54:07.662000 audit[1037]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f83de4d750 a1=338ec a2=7fbf05226bc5 a3=5 items=110 ppid=1025 pid=1037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:07.662000 audit: CWD cwd="/" Sep 13 00:54:07.662000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=1 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=2 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=3 name=(null) inode=15802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=4 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=5 name=(null) inode=15803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=6 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=7 name=(null) inode=15804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=8 name=(null) inode=15804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=9 name=(null) inode=15805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=10 name=(null) inode=15804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=11 name=(null) inode=15806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=12 name=(null) inode=15804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=13 name=(null) inode=15807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=14 name=(null) inode=15804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=15 name=(null) inode=15808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=16 name=(null) inode=15804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=17 name=(null) inode=15809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=18 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=19 name=(null) inode=15810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=20 name=(null) inode=15810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=21 name=(null) inode=15811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=22 name=(null) inode=15810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=23 name=(null) inode=15812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=24 name=(null) inode=15810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=25 name=(null) inode=15813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=26 name=(null) inode=15810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=27 name=(null) inode=15814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=28 name=(null) inode=15810 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=29 name=(null) inode=15815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=30 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=31 name=(null) inode=15816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=32 name=(null) inode=15816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=33 name=(null) inode=15817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=34 name=(null) inode=15816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=35 name=(null) inode=15818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=36 name=(null) inode=15816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=37 name=(null) inode=15819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=38 name=(null) inode=15816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH 
item=39 name=(null) inode=15820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=40 name=(null) inode=15816 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=41 name=(null) inode=15821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=42 name=(null) inode=15801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=43 name=(null) inode=15822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=44 name=(null) inode=15822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=45 name=(null) inode=15823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=46 name=(null) inode=15822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=47 name=(null) inode=15824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=48 name=(null) inode=15822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=49 name=(null) inode=15825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=50 name=(null) inode=15822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=51 name=(null) inode=15826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=52 name=(null) inode=15822 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=53 name=(null) inode=15827 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=55 name=(null) inode=15828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=56 name=(null) inode=15828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=57 name=(null) inode=15829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=58 name=(null) inode=15828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=59 name=(null) inode=15830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=60 name=(null) inode=15828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=61 name=(null) inode=15831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=62 name=(null) inode=15831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=63 name=(null) inode=15832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=64 name=(null) inode=15831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=65 name=(null) inode=15833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=66 name=(null) inode=15831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=67 name=(null) inode=15834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=68 name=(null) inode=15831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=69 name=(null) inode=15835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=70 name=(null) inode=15831 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=71 name=(null) inode=15836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=72 name=(null) inode=15828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=73 name=(null) inode=15837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=74 name=(null) inode=15837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=75 name=(null) inode=15838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=76 name=(null) inode=15837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=77 name=(null) inode=15839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=78 name=(null) inode=15837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=79 name=(null) inode=15840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=80 name=(null) inode=15837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=81 name=(null) inode=15841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=82 name=(null) inode=15837 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=83 name=(null) inode=15842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=84 name=(null) inode=15828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=85 name=(null) inode=15843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=86 name=(null) inode=15843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=87 name=(null) inode=15844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=88 
name=(null) inode=15843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=89 name=(null) inode=15845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=90 name=(null) inode=15843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=91 name=(null) inode=15846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=92 name=(null) inode=15843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=93 name=(null) inode=15847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=94 name=(null) inode=15843 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=95 name=(null) inode=15848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=96 name=(null) inode=15828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=97 name=(null) inode=15849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=98 name=(null) inode=15849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=99 name=(null) inode=15850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=100 name=(null) inode=15849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=101 name=(null) inode=15851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=102 name=(null) inode=15849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=103 name=(null) inode=15852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=104 name=(null) inode=15849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=105 name=(null) inode=15853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=106 name=(null) inode=15849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=107 name=(null) inode=15854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PATH item=109 name=(null) inode=15855 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:54:07.662000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:54:07.679164 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:54:07.686534 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:54:07.686885 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 00:54:07.687041 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:54:07.699167 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:54:07.740206 kernel: kvm: Nested Virtualization enabled Sep 13 00:54:07.740457 kernel: SVM: kvm: Nested Paging enabled Sep 13 00:54:07.740520 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 13 00:54:07.740559 kernel: SVM: Virtual GIF supported Sep 13 00:54:07.756171 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:54:07.801592 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:54:07.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.803673 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:54:07.810759 lvm[1060]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:54:07.836256 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:54:07.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.837347 systemd[1]: Reached target cryptsetup.target. Sep 13 00:54:07.839240 systemd[1]: Starting lvm2-activation.service... Sep 13 00:54:07.843503 lvm[1061]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:54:07.870945 systemd[1]: Finished lvm2-activation.service. Sep 13 00:54:07.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.871875 systemd[1]: Reached target local-fs-pre.target. 
Sep 13 00:54:07.872693 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:54:07.872714 systemd[1]: Reached target local-fs.target. Sep 13 00:54:07.873486 systemd[1]: Reached target machines.target. Sep 13 00:54:07.875195 systemd[1]: Starting ldconfig.service... Sep 13 00:54:07.876138 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:07.876185 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:07.876953 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:54:07.878439 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:54:07.880380 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:54:07.882189 systemd[1]: Starting systemd-sysext.service... Sep 13 00:54:07.883694 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1063 (bootctl) Sep 13 00:54:07.888335 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:54:07.889863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:54:07.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.894425 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:54:07.898199 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:54:07.898335 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:54:07.906174 kernel: loop0: detected capacity change from 0 to 224512 Sep 13 00:54:07.919752 systemd-fsck[1071]: fsck.fat 4.2 (2021-01-31) Sep 13 00:54:07.919752 systemd-fsck[1071]: /dev/vda1: 790 files, 120761/258078 clusters Sep 13 00:54:07.921565 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:54:07.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:07.924675 systemd[1]: Mounting boot.mount... Sep 13 00:54:07.939118 systemd[1]: Mounted boot.mount. Sep 13 00:54:08.113179 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:54:08.116202 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:54:08.116738 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:54:08.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.118779 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:54:08.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.125160 kernel: loop1: detected capacity change from 0 to 224512 Sep 13 00:54:08.128523 (sd-sysext)[1076]: Using extensions 'kubernetes'. Sep 13 00:54:08.129567 (sd-sysext)[1076]: Merged extensions into '/usr'. 
Sep 13 00:54:08.143771 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:08.145127 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:54:08.146010 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.147102 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:08.148744 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:08.150470 systemd[1]: Starting modprobe@loop.service... Sep 13 00:54:08.151217 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.151314 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:08.151405 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:08.153605 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:54:08.154603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:08.154698 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:08.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.155789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:08.155881 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:08.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.157158 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:08.157398 systemd[1]: Finished modprobe@loop.service. Sep 13 00:54:08.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.158662 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:54:08.158754 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.159632 systemd[1]: Finished systemd-sysext.service. Sep 13 00:54:08.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:08.161737 systemd[1]: Starting ensure-sysext.service... Sep 13 00:54:08.163413 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:54:08.167749 ldconfig[1062]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:54:08.167808 systemd[1]: Reloading. Sep 13 00:54:08.174806 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:54:08.176746 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:54:08.179501 systemd-tmpfiles[1084]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:54:08.218090 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T00:54:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:08.218123 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T00:54:08Z" level=info msg="torcx already run" Sep 13 00:54:08.280072 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:08.280091 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:08.297058 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:08.348000 audit: BPF prog-id=27 op=LOAD Sep 13 00:54:08.348000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:54:08.349000 audit: BPF prog-id=28 op=LOAD Sep 13 00:54:08.349000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:54:08.349000 audit: BPF prog-id=29 op=LOAD Sep 13 00:54:08.349000 audit: BPF prog-id=30 op=LOAD Sep 13 00:54:08.349000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:54:08.349000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:54:08.351000 audit: BPF prog-id=31 op=LOAD Sep 13 00:54:08.351000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:54:08.351000 audit: BPF prog-id=32 op=LOAD Sep 13 00:54:08.351000 audit: BPF prog-id=33 op=LOAD Sep 13 00:54:08.351000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:54:08.351000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:54:08.352000 audit: BPF prog-id=34 op=LOAD Sep 13 00:54:08.352000 audit: BPF prog-id=35 op=LOAD Sep 13 00:54:08.352000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:54:08.352000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:54:08.354233 systemd[1]: Finished ldconfig.service. Sep 13 00:54:08.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.356070 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:54:08.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.359447 systemd[1]: Starting audit-rules.service... 
Sep 13 00:54:08.361243 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:54:08.363116 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:54:08.364000 audit: BPF prog-id=36 op=LOAD Sep 13 00:54:08.365349 systemd[1]: Starting systemd-resolved.service... Sep 13 00:54:08.366000 audit: BPF prog-id=37 op=LOAD Sep 13 00:54:08.367338 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:54:08.369893 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:54:08.371225 systemd[1]: Finished clean-ca-certificates.service. Sep 13 00:54:08.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.374000 audit[1158]: SYSTEM_BOOT pid=1158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.377666 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:08.378120 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.379836 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:08.382130 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:08.383905 systemd[1]: Starting modprobe@loop.service... Sep 13 00:54:08.384665 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.384881 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:08.385083 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:54:08.385286 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:08.386561 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:54:08.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.388054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:08.388169 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:08.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.389682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:08.389813 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:08.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:54:08.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.391324 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:08.391453 systemd[1]: Finished modprobe@loop.service. Sep 13 00:54:08.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.393542 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:54:08.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:54:08.398000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:54:08.398000 audit[1170]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde8fb5390 a2=420 a3=0 items=0 ppid=1147 pid=1170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:54:08.398000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:54:08.398985 augenrules[1170]: No rules Sep 13 00:54:08.398943 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:08.399523 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.400706 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:54:08.402474 systemd[1]: Starting modprobe@drm.service... Sep 13 00:54:08.404088 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:54:08.405739 systemd[1]: Starting modprobe@loop.service... Sep 13 00:54:08.406488 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.406576 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:08.407736 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:54:08.409832 systemd[1]: Starting systemd-update-done.service... Sep 13 00:54:08.410683 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:54:08.410784 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:54:08.411924 systemd[1]: Finished audit-rules.service. Sep 13 00:54:08.413099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:54:08.413302 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:54:08.414443 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 13 00:54:08.414540 systemd[1]: Finished modprobe@drm.service. Sep 13 00:54:08.415844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:54:08.416059 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:54:08.417569 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:54:08.417752 systemd[1]: Finished modprobe@loop.service. Sep 13 00:54:08.418995 systemd[1]: Finished systemd-update-done.service. Sep 13 00:54:08.420543 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:54:08.420630 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:54:08.421748 systemd[1]: Finished ensure-sysext.service. Sep 13 00:54:08.426863 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:54:08.427794 systemd[1]: Reached target time-set.target. Sep 13 00:54:09.882999 systemd-timesyncd[1155]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:54:09.883041 systemd-timesyncd[1155]: Initial clock synchronization to Sat 2025-09-13 00:54:09.882937 UTC. Sep 13 00:54:09.888483 systemd-resolved[1152]: Positive Trust Anchors: Sep 13 00:54:09.888496 systemd-resolved[1152]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:54:09.888522 systemd-resolved[1152]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:54:09.894854 systemd-resolved[1152]: Defaulting to hostname 'linux'. Sep 13 00:54:09.896101 systemd[1]: Started systemd-resolved.service. Sep 13 00:54:09.896978 systemd[1]: Reached target network.target. Sep 13 00:54:09.897724 systemd[1]: Reached target nss-lookup.target. Sep 13 00:54:09.898486 systemd[1]: Reached target sysinit.target. Sep 13 00:54:09.899317 systemd[1]: Started motdgen.path. Sep 13 00:54:09.899996 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:54:09.901171 systemd[1]: Started logrotate.timer. Sep 13 00:54:09.901935 systemd[1]: Started mdadm.timer. Sep 13 00:54:09.902578 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:54:09.903387 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:54:09.903413 systemd[1]: Reached target paths.target. Sep 13 00:54:09.904127 systemd[1]: Reached target timers.target. Sep 13 00:54:09.905125 systemd[1]: Listening on dbus.socket. Sep 13 00:54:09.906737 systemd[1]: Starting docker.socket... Sep 13 00:54:09.909504 systemd[1]: Listening on sshd.socket. Sep 13 00:54:09.910318 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:09.910646 systemd[1]: Listening on docker.socket. Sep 13 00:54:09.911444 systemd[1]: Reached target sockets.target. Sep 13 00:54:09.912214 systemd[1]: Reached target basic.target. Sep 13 00:54:09.912973 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Sep 13 00:54:09.912995 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:54:09.913770 systemd[1]: Starting containerd.service... Sep 13 00:54:09.915408 systemd[1]: Starting dbus.service... Sep 13 00:54:09.916812 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:54:09.918498 systemd[1]: Starting extend-filesystems.service... Sep 13 00:54:09.919437 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:54:09.920293 systemd[1]: Starting motdgen.service... Sep 13 00:54:09.921330 jq[1186]: false Sep 13 00:54:09.922265 systemd[1]: Starting prepare-helm.service... Sep 13 00:54:09.923739 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:54:09.926158 systemd[1]: Starting sshd-keygen.service... Sep 13 00:54:09.929302 systemd[1]: Starting systemd-logind.service... Sep 13 00:54:09.930030 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:54:09.930073 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:54:09.930385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:54:09.932827 systemd[1]: Starting update-engine.service... Sep 13 00:54:09.934649 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:54:09.935733 extend-filesystems[1187]: Found loop1 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found sr0 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda1 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda2 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda3 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found usr Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda4 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda6 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda7 Sep 13 00:54:09.935733 extend-filesystems[1187]: Found vda9 Sep 13 00:54:09.935733 extend-filesystems[1187]: Checking size of /dev/vda9 Sep 13 00:54:09.982719 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:54:09.982752 jq[1204]: true Sep 13 00:54:09.982892 extend-filesystems[1187]: Resized partition /dev/vda9 Sep 13 00:54:09.984826 update_engine[1199]: I0913 00:54:09.969034 1199 main.cc:92] Flatcar Update Engine starting Sep 13 00:54:09.984826 update_engine[1199]: I0913 00:54:09.971567 1199 update_check_scheduler.cc:74] Next update check in 7m20s Sep 13 00:54:09.936942 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:54:09.937886 dbus-daemon[1185]: [system] SELinux support is enabled Sep 13 00:54:09.985252 extend-filesystems[1224]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:54:09.937119 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:54:09.937983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:54:09.987981 tar[1207]: linux-amd64/LICENSE Sep 13 00:54:09.987981 tar[1207]: linux-amd64/helm Sep 13 00:54:09.938126 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:54:09.939472 systemd[1]: Started dbus.service. 
Sep 13 00:54:09.988291 jq[1212]: true Sep 13 00:54:09.945429 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:54:09.945454 systemd[1]: Reached target system-config.target. Sep 13 00:54:09.946510 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:54:09.946525 systemd[1]: Reached target user-config.target. Sep 13 00:54:09.962932 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:54:09.963073 systemd[1]: Finished motdgen.service. Sep 13 00:54:09.971725 systemd[1]: Started update-engine.service. Sep 13 00:54:09.979238 systemd[1]: Started locksmithd.service. Sep 13 00:54:09.990821 env[1211]: time="2025-09-13T00:54:09.990193091Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:54:09.998489 systemd-logind[1197]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:54:09.998513 systemd-logind[1197]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:54:09.999156 systemd-logind[1197]: New seat seat0. Sep 13 00:54:10.000884 systemd[1]: Started systemd-logind.service. Sep 13 00:54:10.011682 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:54:10.016412 env[1211]: time="2025-09-13T00:54:10.016371235Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:54:10.036649 env[1211]: time="2025-09-13T00:54:10.036617356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:10.036856 extend-filesystems[1224]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:54:10.036856 extend-filesystems[1224]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:54:10.036856 extend-filesystems[1224]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:54:10.041980 extend-filesystems[1187]: Resized filesystem in /dev/vda9 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.037807458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.037829399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038005769Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038019796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038031468Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038040254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038095598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038269444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038368229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:54:10.044129 env[1211]: time="2025-09-13T00:54:10.038381554Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:54:10.037593 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:54:10.044392 bash[1238]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:54:10.044495 env[1211]: time="2025-09-13T00:54:10.038419856Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:54:10.044495 env[1211]: time="2025-09-13T00:54:10.038430656Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:54:10.037755 systemd[1]: Finished extend-filesystems.service. Sep 13 00:54:10.041521 locksmithd[1231]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:54:10.042727 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:54:10.045685 env[1211]: time="2025-09-13T00:54:10.045615518Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:54:10.045685 env[1211]: time="2025-09-13T00:54:10.045667575Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:54:10.045685 env[1211]: time="2025-09-13T00:54:10.045684056Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:54:10.045757 env[1211]: time="2025-09-13T00:54:10.045715696Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.045757 env[1211]: time="2025-09-13T00:54:10.045729822Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.045757 env[1211]: time="2025-09-13T00:54:10.045741544Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.045757 env[1211]: time="2025-09-13T00:54:10.045753436Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.045857 env[1211]: time="2025-09-13T00:54:10.045766381Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Sep 13 00:54:10.045857 env[1211]: time="2025-09-13T00:54:10.045778253Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.045857 env[1211]: time="2025-09-13T00:54:10.045790836Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.045857 env[1211]: time="2025-09-13T00:54:10.045801246Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.045857 env[1211]: time="2025-09-13T00:54:10.045814140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:54:10.045966 env[1211]: time="2025-09-13T00:54:10.045913957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:54:10.046013 env[1211]: time="2025-09-13T00:54:10.045997254Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:54:10.046229 env[1211]: time="2025-09-13T00:54:10.046203190Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:54:10.046259 env[1211]: time="2025-09-13T00:54:10.046230000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046259 env[1211]: time="2025-09-13T00:54:10.046243375Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:54:10.046299 env[1211]: time="2025-09-13T00:54:10.046284723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046299 env[1211]: time="2025-09-13T00:54:10.046297366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046348 env[1211]: time="2025-09-13T00:54:10.046308537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046348 env[1211]: time="2025-09-13T00:54:10.046319618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046348 env[1211]: time="2025-09-13T00:54:10.046331520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046348 env[1211]: time="2025-09-13T00:54:10.046342401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046430 env[1211]: time="2025-09-13T00:54:10.046352560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046430 env[1211]: time="2025-09-13T00:54:10.046362538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046430 env[1211]: time="2025-09-13T00:54:10.046374371Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:54:10.046510 env[1211]: time="2025-09-13T00:54:10.046470341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046510 env[1211]: time="2025-09-13T00:54:10.046484577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 13 00:54:10.046510 env[1211]: time="2025-09-13T00:54:10.046495277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046510 env[1211]: time="2025-09-13T00:54:10.046505517Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:54:10.046598 env[1211]: time="2025-09-13T00:54:10.046518621Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:54:10.046598 env[1211]: time="2025-09-13T00:54:10.046531746Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:54:10.046598 env[1211]: time="2025-09-13T00:54:10.046557083Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:54:10.046598 env[1211]: time="2025-09-13T00:54:10.046588232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:54:10.046843 env[1211]: time="2025-09-13T00:54:10.046790912Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:54:10.046843 env[1211]: time="2025-09-13T00:54:10.046842078Z" level=info msg="Connect containerd service" Sep 13 00:54:10.047419 env[1211]: time="2025-09-13T00:54:10.046871853Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 
13 00:54:10.047419 env[1211]: time="2025-09-13T00:54:10.047308202Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:54:10.047533 env[1211]: time="2025-09-13T00:54:10.047505461Z" level=info msg="Start subscribing containerd event" Sep 13 00:54:10.047572 env[1211]: time="2025-09-13T00:54:10.047564171Z" level=info msg="Start recovering state" Sep 13 00:54:10.047648 env[1211]: time="2025-09-13T00:54:10.047633231Z" level=info msg="Start event monitor" Sep 13 00:54:10.047704 env[1211]: time="2025-09-13T00:54:10.047649421Z" level=info msg="Start snapshots syncer" Sep 13 00:54:10.047704 env[1211]: time="2025-09-13T00:54:10.047668006Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:54:10.047704 env[1211]: time="2025-09-13T00:54:10.047675390Z" level=info msg="Start streaming server" Sep 13 00:54:10.047821 env[1211]: time="2025-09-13T00:54:10.047804342Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:54:10.047852 env[1211]: time="2025-09-13T00:54:10.047838386Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:54:10.047905 env[1211]: time="2025-09-13T00:54:10.047891826Z" level=info msg="containerd successfully booted in 0.063006s" Sep 13 00:54:10.047942 systemd[1]: Started containerd.service. Sep 13 00:54:10.168824 systemd-networkd[1038]: eth0: Gained IPv6LL Sep 13 00:54:10.178019 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:54:10.179315 systemd[1]: Reached target network-online.target. Sep 13 00:54:10.180826 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:54:10.181402 systemd[1]: Starting kubelet.service... Sep 13 00:54:10.216168 systemd[1]: Finished sshd-keygen.service. Sep 13 00:54:10.218377 systemd[1]: Starting issuegen.service... Sep 13 00:54:10.224383 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:54:10.224484 systemd[1]: Finished issuegen.service. Sep 13 00:54:10.226165 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:54:10.284272 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:54:10.286211 systemd[1]: Started getty@tty1.service. Sep 13 00:54:10.287862 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:54:10.288868 systemd[1]: Reached target getty.target. Sep 13 00:54:10.616097 tar[1207]: linux-amd64/README.md Sep 13 00:54:10.620763 systemd[1]: Finished prepare-helm.service. Sep 13 00:54:11.744192 systemd[1]: Started kubelet.service. Sep 13 00:54:11.745776 systemd[1]: Reached target multi-user.target. Sep 13 00:54:11.748001 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:54:11.755365 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:54:11.755484 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:54:11.756583 systemd[1]: Startup finished in 599ms (kernel) + 4.851s (initrd) + 5.736s (userspace) = 11.187s. 
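
Containerd's CRI plugin logs one error during init: no network config found in /etc/cni/net.d. That is expected this early, since CNI configuration is normally installed later by a network add-on; pod sandboxes simply cannot get networking until a conf file exists. For illustration only (the subnet and file name below are hypothetical, not from this host), a minimal bridge config that would satisfy the conf-dir check:

    # Hypothetical example: a network add-on normally writes this later.
    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-bridge.conf <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
    EOF
    # Plugin binaries are looked up under /opt/cni/bin per the CRI config above.
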
Sep 13 00:54:12.421681 kubelet[1266]: E0913 00:54:12.421617 1266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:54:12.423154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:54:12.423260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:54:12.423480 systemd[1]: kubelet.service: Consumed 2.137s CPU time. Sep 13 00:54:14.138335 systemd[1]: Created slice system-sshd.slice. Sep 13 00:54:14.139437 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:41506.service. Sep 13 00:54:14.177795 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 41506 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:14.179337 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:14.186519 systemd[1]: Created slice user-500.slice. Sep 13 00:54:14.187616 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:54:14.189273 systemd-logind[1197]: New session 1 of user core. Sep 13 00:54:14.195347 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:54:14.196378 systemd[1]: Starting user@500.service... Sep 13 00:54:14.198768 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:14.262099 systemd[1278]: Queued start job for default target default.target. Sep 13 00:54:14.262570 systemd[1278]: Reached target paths.target. Sep 13 00:54:14.262592 systemd[1278]: Reached target sockets.target. Sep 13 00:54:14.262608 systemd[1278]: Reached target timers.target. Sep 13 00:54:14.262622 systemd[1278]: Reached target basic.target. Sep 13 00:54:14.262675 systemd[1278]: Reached target default.target. Sep 13 00:54:14.262705 systemd[1278]: Startup finished in 59ms. Sep 13 00:54:14.262744 systemd[1]: Started user@500.service. Sep 13 00:54:14.263530 systemd[1]: Started session-1.scope. Sep 13 00:54:14.314836 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:41510.service. Sep 13 00:54:14.353149 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 41510 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:14.354507 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:14.358139 systemd-logind[1197]: New session 2 of user core. Sep 13 00:54:14.359164 systemd[1]: Started session-2.scope. Sep 13 00:54:14.410952 sshd[1287]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:14.413556 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:41510.service: Deactivated successfully. Sep 13 00:54:14.414095 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:54:14.414582 systemd-logind[1197]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:54:14.415500 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:41526.service. Sep 13 00:54:14.416066 systemd-logind[1197]: Removed session 2. Sep 13 00:54:14.450055 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 41526 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:14.451154 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:14.454180 systemd-logind[1197]: New session 3 of user core. Sep 13 00:54:14.454972 systemd[1]: Started session-3.scope. 
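
The kubelet's first start fails fast: /var/lib/kubelet/config.yaml does not exist, so the process exits (status=1) after consuming 2.137s of CPU. On a kubeadm-bootstrapped node this is the normal pre-bootstrap state, since that file is generated by kubeadm init (or kubeadm join); the unit will keep crash-looping until then. A hedged diagnostic sketch, assuming systemd and kubeadm conventions:

    # Why is kubelet failing? The unit status and last journal lines show
    # the missing-config error verbatim.
    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager

    # The file the kubelet wants; on kubeadm-managed nodes it appears only
    # after 'kubeadm init' / 'kubeadm join' has run.
    ls -l /var/lib/kubelet/config.yaml
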
Sep 13 00:54:14.504012 sshd[1293]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:14.506713 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:41526.service: Deactivated successfully. Sep 13 00:54:14.507190 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:54:14.507722 systemd-logind[1197]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:54:14.508736 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:41540.service. Sep 13 00:54:14.509285 systemd-logind[1197]: Removed session 3. Sep 13 00:54:14.543507 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 41540 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:14.544408 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:14.547170 systemd-logind[1197]: New session 4 of user core. Sep 13 00:54:14.548096 systemd[1]: Started session-4.scope. Sep 13 00:54:14.600144 sshd[1299]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:14.602773 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:41540.service: Deactivated successfully. Sep 13 00:54:14.603249 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:54:14.603713 systemd-logind[1197]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:54:14.604565 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:41552.service. Sep 13 00:54:14.605184 systemd-logind[1197]: Removed session 4. Sep 13 00:54:14.639045 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 41552 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:54:14.640166 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:54:14.643497 systemd-logind[1197]: New session 5 of user core. Sep 13 00:54:14.644393 systemd[1]: Started session-5.scope. Sep 13 00:54:14.699871 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:54:14.700120 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:54:14.723179 systemd[1]: Starting docker.service... 
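
The sudo entry shows user core running /home/core/install.sh as root with no password exchange logged, consistent with the passwordless sudo grant Flatcar images conventionally give the core user (an assumption here; the sudoers file itself is not in this log). One way to verify on the host:

    # List core's sudo privileges; a blanket Flatcar-style grant shows up as
    # '(ALL) NOPASSWD: ALL'. Requires root.
    sudo -l -U core
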
Sep 13 00:54:14.818420 env[1321]: time="2025-09-13T00:54:14.818362166Z" level=info msg="Starting up" Sep 13 00:54:14.819508 env[1321]: time="2025-09-13T00:54:14.819484100Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:54:14.819508 env[1321]: time="2025-09-13T00:54:14.819497375Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:54:14.819581 env[1321]: time="2025-09-13T00:54:14.819516200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:54:14.819581 env[1321]: time="2025-09-13T00:54:14.819528533Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:54:14.821607 env[1321]: time="2025-09-13T00:54:14.821571284Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:54:14.821607 env[1321]: time="2025-09-13T00:54:14.821595579Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:54:14.821706 env[1321]: time="2025-09-13T00:54:14.821612060Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:54:14.821706 env[1321]: time="2025-09-13T00:54:14.821622269Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:54:15.377752 env[1321]: time="2025-09-13T00:54:15.377690287Z" level=info msg="Loading containers: start." Sep 13 00:54:15.488694 kernel: Initializing XFRM netlink socket Sep 13 00:54:15.513850 env[1321]: time="2025-09-13T00:54:15.513803148Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 00:54:15.564023 systemd-networkd[1038]: docker0: Link UP Sep 13 00:54:15.576825 env[1321]: time="2025-09-13T00:54:15.576783146Z" level=info msg="Loading containers: done." Sep 13 00:54:15.588574 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1139791301-merged.mount: Deactivated successfully. Sep 13 00:54:15.590190 env[1321]: time="2025-09-13T00:54:15.590150580Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:54:15.590330 env[1321]: time="2025-09-13T00:54:15.590306412Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:54:15.590421 env[1321]: time="2025-09-13T00:54:15.590392583Z" level=info msg="Daemon has completed initialization" Sep 13 00:54:15.605762 systemd[1]: Started docker.service. Sep 13 00:54:15.692895 env[1321]: time="2025-09-13T00:54:15.692765713Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:54:16.580866 env[1211]: time="2025-09-13T00:54:16.580807862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:54:18.421783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1203812696.mount: Deactivated successfully. 
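
With docker up, the log shifts to fetching the Kubernetes v1.32.9 control-plane images through containerd. Assuming kubeadm is the intended bootstrapper (the missing /var/lib/kubelet/config.yaml above is exactly the file kubeadm writes), the same set can be pre-pulled explicitly:

    # Pre-pull all control-plane images for the version this log pulls
    # (assumes kubeadm is installed on the node).
    kubeadm config images pull --kubernetes-version v1.32.9

    # Or fetch a single image directly via the CRI, bypassing kubeadm.
    crictl pull registry.k8s.io/kube-apiserver:v1.32.9
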
Sep 13 00:54:20.375573 env[1211]: time="2025-09-13T00:54:20.375512051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:20.377390 env[1211]: time="2025-09-13T00:54:20.377361689Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:20.379572 env[1211]: time="2025-09-13T00:54:20.379524204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:20.381172 env[1211]: time="2025-09-13T00:54:20.381119345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:20.381930 env[1211]: time="2025-09-13T00:54:20.381901040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:54:20.383204 env[1211]: time="2025-09-13T00:54:20.383168317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:54:22.674066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:54:22.674303 systemd[1]: Stopped kubelet.service. Sep 13 00:54:22.674350 systemd[1]: kubelet.service: Consumed 2.137s CPU time. Sep 13 00:54:22.676203 systemd[1]: Starting kubelet.service... Sep 13 00:54:22.905090 systemd[1]: Started kubelet.service. Sep 13 00:54:22.969678 kubelet[1458]: E0913 00:54:22.969352 1458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:54:22.972398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:54:22.972515 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
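
Note the cadence: the kubelet exits at 00:54:12 and systemd schedules restart number 1 at 00:54:22, about ten seconds later. That matches the Restart=/RestartSec= pair kubelet units commonly ship; the unit file itself is not shown in this log, so the following drop-in is a representative sketch rather than this host's actual configuration:

    # Representative restart policy matching the ~10s cadence in the log.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat >/etc/systemd/system/kubelet.service.d/10-restart.conf <<'EOF'
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload
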
Sep 13 00:54:23.825909 env[1211]: time="2025-09-13T00:54:23.825841227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:23.905335 env[1211]: time="2025-09-13T00:54:23.905257322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:23.909719 env[1211]: time="2025-09-13T00:54:23.909671098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:23.911419 env[1211]: time="2025-09-13T00:54:23.911378780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:23.913170 env[1211]: time="2025-09-13T00:54:23.913127319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:54:23.913846 env[1211]: time="2025-09-13T00:54:23.913816781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:54:27.016227 env[1211]: time="2025-09-13T00:54:27.016144405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:27.018694 env[1211]: time="2025-09-13T00:54:27.018625798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:27.021185 env[1211]: time="2025-09-13T00:54:27.021154669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:27.023239 env[1211]: time="2025-09-13T00:54:27.023191749Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:27.023990 env[1211]: time="2025-09-13T00:54:27.023953848Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:54:27.024601 env[1211]: time="2025-09-13T00:54:27.024571426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:54:29.495450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157223313.mount: Deactivated successfully. 
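
Each successful pull above emits ImageCreate events for the tag, the image ID (sha256:...), and the repo digest, followed by an ImageUpdate on the tag. All three references land in containerd's k8s.io namespace and can be listed to confirm what the node now holds:

    # Images as containerd stores them for the CRI (namespace k8s.io).
    ctr --namespace k8s.io images ls

    # The same view through the CRI API.
    crictl images
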
Sep 13 00:54:30.555918 env[1211]: time="2025-09-13T00:54:30.555851304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:30.558273 env[1211]: time="2025-09-13T00:54:30.558227360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:30.559737 env[1211]: time="2025-09-13T00:54:30.559697135Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:30.561287 env[1211]: time="2025-09-13T00:54:30.561258122Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:30.561743 env[1211]: time="2025-09-13T00:54:30.561708617Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:54:30.562280 env[1211]: time="2025-09-13T00:54:30.562247577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:54:31.279183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753439569.mount: Deactivated successfully. Sep 13 00:54:32.995520 env[1211]: time="2025-09-13T00:54:32.995454249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:32.998226 env[1211]: time="2025-09-13T00:54:32.998178117Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:33.000024 env[1211]: time="2025-09-13T00:54:32.999990896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:33.001874 env[1211]: time="2025-09-13T00:54:33.001850492Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:33.002516 env[1211]: time="2025-09-13T00:54:33.002491264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:54:33.003025 env[1211]: time="2025-09-13T00:54:33.003005438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:54:33.223249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:54:33.223432 systemd[1]: Stopped kubelet.service. Sep 13 00:54:33.224711 systemd[1]: Starting kubelet.service... Sep 13 00:54:33.310006 systemd[1]: Started kubelet.service. 
Sep 13 00:54:33.843341 kubelet[1470]: E0913 00:54:33.843285 1470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:54:33.844826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:54:33.844931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:54:34.481072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613195351.mount: Deactivated successfully. Sep 13 00:54:34.486583 env[1211]: time="2025-09-13T00:54:34.486542159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.488418 env[1211]: time="2025-09-13T00:54:34.488387499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.489883 env[1211]: time="2025-09-13T00:54:34.489857986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.491239 env[1211]: time="2025-09-13T00:54:34.491210111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:34.491643 env[1211]: time="2025-09-13T00:54:34.491615521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:54:34.492171 env[1211]: time="2025-09-13T00:54:34.492147689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:54:35.070640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180863305.mount: Deactivated successfully. 
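
One skew worth flagging: the CRI config dumped earlier advertises SandboxImage registry.k8s.io/pause:3.6, yet the pull here fetches pause:3.10, so the node ends up holding two pause versions. Aligning containerd's sandbox image with what the cluster actually uses avoids that; a hedged config fragment for containerd 1.6 follows (stock file path assumed, not read from this host), to be merged into /etc/containerd/config.toml before restarting containerd:

    # Fragment to merge into /etc/containerd/config.toml (containerd 1.6.x),
    # then 'systemctl restart containerd'. SystemdCgroup=true restates the
    # value already present in the CRI config dump above.
    cat <<'EOF'
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF
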
Sep 13 00:54:38.490766 env[1211]: time="2025-09-13T00:54:38.490707828Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.493332 env[1211]: time="2025-09-13T00:54:38.493290601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.494938 env[1211]: time="2025-09-13T00:54:38.494907052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.496915 env[1211]: time="2025-09-13T00:54:38.496880502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:38.497631 env[1211]: time="2025-09-13T00:54:38.497594431Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:54:40.945568 systemd[1]: Stopped kubelet.service. Sep 13 00:54:40.947466 systemd[1]: Starting kubelet.service... Sep 13 00:54:40.976893 systemd[1]: Reloading. Sep 13 00:54:41.051769 /usr/lib/systemd/system-generators/torcx-generator[1524]: time="2025-09-13T00:54:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:41.051808 /usr/lib/systemd/system-generators/torcx-generator[1524]: time="2025-09-13T00:54:41Z" level=info msg="torcx already run" Sep 13 00:54:41.399798 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:41.399817 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:41.418709 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:41.496557 systemd[1]: Started kubelet.service. Sep 13 00:54:41.497846 systemd[1]: Stopping kubelet.service... Sep 13 00:54:41.498066 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:54:41.498207 systemd[1]: Stopped kubelet.service. Sep 13 00:54:41.499613 systemd[1]: Starting kubelet.service... Sep 13 00:54:41.587951 systemd[1]: Started kubelet.service. Sep 13 00:54:41.665877 kubelet[1572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:41.666252 kubelet[1572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
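
The kubelet restarted after the reload (pid 1572, below) immediately warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated in favor of the config file. The mapping is mechanical; a sketch of the equivalent KubeletConfiguration fields follows, with values taken from paths that appear elsewhere in this log (on kubeadm nodes this file is generated at bootstrap, so this is shown only to illustrate the flag-to-field mapping):

    # Illustrative config-file equivalents of the deprecated flags.
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    cgroupDriver: systemd
    EOF
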
Sep 13 00:54:41.666252 kubelet[1572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:41.666361 kubelet[1572]: I0913 00:54:41.666280 1572 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:54:41.892093 kubelet[1572]: I0913 00:54:41.892055 1572 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:54:41.892093 kubelet[1572]: I0913 00:54:41.892087 1572 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:54:41.892419 kubelet[1572]: I0913 00:54:41.892395 1572 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:54:41.921099 kubelet[1572]: E0913 00:54:41.920971 1572 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:41.923597 kubelet[1572]: I0913 00:54:41.923576 1572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:41.929935 kubelet[1572]: E0913 00:54:41.929898 1572 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:54:41.929935 kubelet[1572]: I0913 00:54:41.929934 1572 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:54:41.934288 kubelet[1572]: I0913 00:54:41.934264 1572 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:54:41.937369 kubelet[1572]: I0913 00:54:41.937315 1572 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:54:41.937593 kubelet[1572]: I0913 00:54:41.937358 1572 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:54:41.937729 kubelet[1572]: I0913 00:54:41.937606 1572 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:54:41.937729 kubelet[1572]: I0913 00:54:41.937620 1572 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:54:41.937839 kubelet[1572]: I0913 00:54:41.937824 1572 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:41.940786 kubelet[1572]: I0913 00:54:41.940763 1572 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:54:41.940847 kubelet[1572]: I0913 00:54:41.940796 1572 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:54:41.940847 kubelet[1572]: I0913 00:54:41.940825 1572 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:54:41.940847 kubelet[1572]: I0913 00:54:41.940842 1572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:54:41.958460 kubelet[1572]: I0913 00:54:41.958429 1572 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:54:41.958878 kubelet[1572]: I0913 00:54:41.958849 1572 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:54:41.960384 kubelet[1572]: W0913 00:54:41.960341 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:41.960435 kubelet[1572]: E0913 00:54:41.960415 1572 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:41.964932 kubelet[1572]: W0913 00:54:41.964906 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:41.964982 kubelet[1572]: E0913 00:54:41.964936 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:41.965590 kubelet[1572]: W0913 00:54:41.965563 1572 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:54:41.968215 kubelet[1572]: I0913 00:54:41.968188 1572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:54:41.968277 kubelet[1572]: I0913 00:54:41.968264 1572 server.go:1287] "Started kubelet" Sep 13 00:54:41.969456 kubelet[1572]: I0913 00:54:41.969138 1572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:54:41.969456 kubelet[1572]: I0913 00:54:41.969440 1572 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:54:41.969526 kubelet[1572]: I0913 00:54:41.969496 1572 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:54:41.971377 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:54:41.971534 kubelet[1572]: I0913 00:54:41.971513 1572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:54:41.972214 kubelet[1572]: I0913 00:54:41.972188 1572 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:54:41.972307 kubelet[1572]: I0913 00:54:41.972281 1572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:54:41.973933 kubelet[1572]: E0913 00:54:41.973912 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:41.973986 kubelet[1572]: I0913 00:54:41.973956 1572 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:54:41.974294 kubelet[1572]: I0913 00:54:41.974274 1572 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:54:41.974369 kubelet[1572]: I0913 00:54:41.974352 1572 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:54:41.975068 kubelet[1572]: W0913 00:54:41.974734 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:41.975068 kubelet[1572]: E0913 00:54:41.974939 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:41.975165 kubelet[1572]: E0913 00:54:41.975144 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Sep 13 00:54:41.975293 kubelet[1572]: I0913 00:54:41.975283 1572 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:54:41.975345 kubelet[1572]: I0913 00:54:41.975334 1572 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:54:41.977912 kubelet[1572]: E0913 00:54:41.977895 1572 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:54:41.978746 kubelet[1572]: I0913 00:54:41.978724 1572 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:54:41.984024 kubelet[1572]: I0913 00:54:41.983994 1572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:54:41.984995 kubelet[1572]: I0913 00:54:41.984969 1572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:54:41.984995 kubelet[1572]: I0913 00:54:41.984997 1572 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:54:41.985073 kubelet[1572]: I0913 00:54:41.985022 1572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
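
While the API server stays unreachable, the lease controller retries with a doubling backoff: interval="200ms" here, then 400ms, 800ms, and 1.6s in the entries below. The progression can be pulled straight out of the journal:

    # Extract the retry intervals to watch the backoff double
    # (200ms -> 400ms -> 800ms -> 1.6s in this log).
    journalctl -u kubelet --no-pager | grep -o 'interval="[^"]*"' | uniq
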
Sep 13 00:54:41.985073 kubelet[1572]: I0913 00:54:41.985030 1572 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:54:41.985117 kubelet[1572]: E0913 00:54:41.985077 1572 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:54:41.986004 kubelet[1572]: E0913 00:54:41.978970 1572 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b177a2949b3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:54:41.968208702 +0000 UTC m=+0.375177199,LastTimestamp:2025-09-13 00:54:41.968208702 +0000 UTC m=+0.375177199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:54:41.987721 kubelet[1572]: W0913 00:54:41.987655 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:41.987788 kubelet[1572]: E0913 00:54:41.987747 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:41.994670 kubelet[1572]: I0913 00:54:41.994635 1572 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:54:41.994670 kubelet[1572]: I0913 00:54:41.994648 1572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:54:41.994670 kubelet[1572]: I0913 00:54:41.994678 1572 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:42.074907 kubelet[1572]: E0913 00:54:42.074880 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.086186 kubelet[1572]: E0913 00:54:42.086151 1572 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:54:42.175192 kubelet[1572]: E0913 00:54:42.175009 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.176521 kubelet[1572]: E0913 00:54:42.176474 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Sep 13 00:54:42.275534 kubelet[1572]: E0913 00:54:42.275506 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.286755 kubelet[1572]: E0913 00:54:42.286705 1572 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:54:42.376055 kubelet[1572]: E0913 00:54:42.376024 1572 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.477237 kubelet[1572]: E0913 00:54:42.477125 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.577575 kubelet[1572]: E0913 00:54:42.577530 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.577856 kubelet[1572]: E0913 00:54:42.577819 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Sep 13 00:54:42.678317 kubelet[1572]: E0913 00:54:42.678269 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.687524 kubelet[1572]: E0913 00:54:42.687488 1572 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:54:42.779066 kubelet[1572]: E0913 00:54:42.778986 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.879488 kubelet[1572]: E0913 00:54:42.879452 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:42.892711 kubelet[1572]: I0913 00:54:42.892691 1572 policy_none.go:49] "None policy: Start" Sep 13 00:54:42.892808 kubelet[1572]: I0913 00:54:42.892726 1572 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:54:42.892808 kubelet[1572]: I0913 00:54:42.892753 1572 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:54:42.896172 kubelet[1572]: W0913 00:54:42.896092 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:42.896233 kubelet[1572]: E0913 00:54:42.896190 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:42.899469 systemd[1]: Created slice kubepods.slice. Sep 13 00:54:42.903503 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:54:42.906140 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 00:54:42.916557 kubelet[1572]: I0913 00:54:42.916516 1572 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:54:42.916765 kubelet[1572]: I0913 00:54:42.916747 1572 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:54:42.916816 kubelet[1572]: I0913 00:54:42.916771 1572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:54:42.917358 kubelet[1572]: I0913 00:54:42.917038 1572 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:54:42.917962 kubelet[1572]: E0913 00:54:42.917940 1572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:54:42.918013 kubelet[1572]: E0913 00:54:42.917990 1572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:54:43.014348 kubelet[1572]: W0913 00:54:43.014300 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:43.014422 kubelet[1572]: E0913 00:54:43.014359 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:43.018489 kubelet[1572]: I0913 00:54:43.018470 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:54:43.018739 kubelet[1572]: E0913 00:54:43.018714 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 13 00:54:43.047392 kubelet[1572]: W0913 00:54:43.047322 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:43.047392 kubelet[1572]: E0913 00:54:43.047351 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:43.082525 kubelet[1572]: W0913 00:54:43.082459 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:43.082694 kubelet[1572]: E0913 00:54:43.082532 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:43.220576 kubelet[1572]: I0913 00:54:43.220524 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:54:43.220946 kubelet[1572]: E0913 00:54:43.220905 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 13 00:54:43.379413 kubelet[1572]: E0913 00:54:43.379276 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" Sep 13 00:54:43.496296 systemd[1]: Created slice 
kubepods-burstable-pod5eccaae14580dae2e2bf34e88773be2a.slice. Sep 13 00:54:43.507294 kubelet[1572]: E0913 00:54:43.507255 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:43.508441 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 13 00:54:43.517514 kubelet[1572]: E0913 00:54:43.517490 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:43.519517 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 13 00:54:43.571494 kubelet[1572]: E0913 00:54:43.571453 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:43.583952 kubelet[1572]: I0913 00:54:43.583892 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:54:43.584076 kubelet[1572]: I0913 00:54:43.583956 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:43.584076 kubelet[1572]: I0913 00:54:43.583987 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:43.584076 kubelet[1572]: I0913 00:54:43.584011 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5eccaae14580dae2e2bf34e88773be2a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5eccaae14580dae2e2bf34e88773be2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:43.584076 kubelet[1572]: I0913 00:54:43.584031 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:43.584076 kubelet[1572]: I0913 00:54:43.584048 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:43.584237 kubelet[1572]: I0913 00:54:43.584062 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:43.584237 kubelet[1572]: I0913 00:54:43.584097 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5eccaae14580dae2e2bf34e88773be2a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5eccaae14580dae2e2bf34e88773be2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:43.584237 kubelet[1572]: I0913 00:54:43.584113 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5eccaae14580dae2e2bf34e88773be2a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5eccaae14580dae2e2bf34e88773be2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:43.622974 kubelet[1572]: I0913 00:54:43.622940 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:54:43.623431 kubelet[1572]: E0913 00:54:43.623390 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 13 00:54:43.808647 kubelet[1572]: E0913 00:54:43.808610 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:43.809339 env[1211]: time="2025-09-13T00:54:43.809302420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5eccaae14580dae2e2bf34e88773be2a,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:43.818584 kubelet[1572]: E0913 00:54:43.818548 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:43.819115 env[1211]: time="2025-09-13T00:54:43.819073406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:43.872366 kubelet[1572]: E0913 00:54:43.872324 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:43.872789 env[1211]: time="2025-09-13T00:54:43.872751836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 13 00:54:43.937333 kubelet[1572]: E0913 00:54:43.937210 1572 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864b177a2949b3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:54:41.968208702 +0000 UTC m=+0.375177199,LastTimestamp:2025-09-13 00:54:41.968208702 +0000 UTC 
m=+0.375177199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:54:43.938196 kubelet[1572]: E0913 00:54:43.938162 1572 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:44.424708 kubelet[1572]: I0913 00:54:44.424650 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:54:44.425116 kubelet[1572]: E0913 00:54:44.425070 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 13 00:54:44.743530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861998345.mount: Deactivated successfully. Sep 13 00:54:44.750008 env[1211]: time="2025-09-13T00:54:44.749955753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.753526 env[1211]: time="2025-09-13T00:54:44.753490519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.754324 env[1211]: time="2025-09-13T00:54:44.754287995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.755389 env[1211]: time="2025-09-13T00:54:44.755353315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.757903 env[1211]: time="2025-09-13T00:54:44.757870502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.759139 env[1211]: time="2025-09-13T00:54:44.759111588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.760290 env[1211]: time="2025-09-13T00:54:44.760251609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.761548 env[1211]: time="2025-09-13T00:54:44.761523985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.763457 env[1211]: time="2025-09-13T00:54:44.763436416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.764569 env[1211]: time="2025-09-13T00:54:44.764535900Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.765808 env[1211]: time="2025-09-13T00:54:44.765749723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.766427 env[1211]: time="2025-09-13T00:54:44.766406971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:54:44.788074 env[1211]: time="2025-09-13T00:54:44.787981344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:44.788074 env[1211]: time="2025-09-13T00:54:44.788023104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:44.788074 env[1211]: time="2025-09-13T00:54:44.788033433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:44.788284 env[1211]: time="2025-09-13T00:54:44.788218729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fffecca915dba7f2603ab614e863027540cfa945a8a40599e26ffa1825bdfe6 pid=1615 runtime=io.containerd.runc.v2 Sep 13 00:54:44.911432 env[1211]: time="2025-09-13T00:54:44.911346233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:44.911432 env[1211]: time="2025-09-13T00:54:44.911394815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:44.911872 env[1211]: time="2025-09-13T00:54:44.911407379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:44.911872 env[1211]: time="2025-09-13T00:54:44.911803978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f78d138ce032495d5ab68cf66a54e656e47772e5cce3fbf667dc760772709e23 pid=1633 runtime=io.containerd.runc.v2 Sep 13 00:54:44.918090 env[1211]: time="2025-09-13T00:54:44.917191059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:54:44.918090 env[1211]: time="2025-09-13T00:54:44.917273818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:54:44.918090 env[1211]: time="2025-09-13T00:54:44.917341257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:54:44.918090 env[1211]: time="2025-09-13T00:54:44.917561708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f715c0c0294d54f58ea67fcc81deb74cefe854d39afcdcd834805829b898ef67 pid=1648 runtime=io.containerd.runc.v2 Sep 13 00:54:44.924472 systemd[1]: Started cri-containerd-3fffecca915dba7f2603ab614e863027540cfa945a8a40599e26ffa1825bdfe6.scope. 
Sep 13 00:54:44.935169 systemd[1]: Started cri-containerd-f78d138ce032495d5ab68cf66a54e656e47772e5cce3fbf667dc760772709e23.scope. Sep 13 00:54:44.960795 systemd[1]: Started cri-containerd-f715c0c0294d54f58ea67fcc81deb74cefe854d39afcdcd834805829b898ef67.scope. Sep 13 00:54:44.974056 kubelet[1572]: W0913 00:54:44.973407 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:44.974056 kubelet[1572]: E0913 00:54:44.973454 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:44.979842 kubelet[1572]: E0913 00:54:44.979800 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="3.2s" Sep 13 00:54:45.080912 env[1211]: time="2025-09-13T00:54:45.080866992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f78d138ce032495d5ab68cf66a54e656e47772e5cce3fbf667dc760772709e23\"" Sep 13 00:54:45.082289 kubelet[1572]: E0913 00:54:45.082261 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:45.084808 env[1211]: time="2025-09-13T00:54:45.084781905Z" level=info msg="CreateContainer within sandbox \"f78d138ce032495d5ab68cf66a54e656e47772e5cce3fbf667dc760772709e23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:54:45.086146 env[1211]: time="2025-09-13T00:54:45.085794902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5eccaae14580dae2e2bf34e88773be2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fffecca915dba7f2603ab614e863027540cfa945a8a40599e26ffa1825bdfe6\"" Sep 13 00:54:45.086404 kubelet[1572]: E0913 00:54:45.086377 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:45.087766 env[1211]: time="2025-09-13T00:54:45.087727536Z" level=info msg="CreateContainer within sandbox \"3fffecca915dba7f2603ab614e863027540cfa945a8a40599e26ffa1825bdfe6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:54:45.096135 env[1211]: time="2025-09-13T00:54:45.096060054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"f715c0c0294d54f58ea67fcc81deb74cefe854d39afcdcd834805829b898ef67\"" Sep 13 00:54:45.096813 kubelet[1572]: E0913 00:54:45.096784 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:45.098518 env[1211]: time="2025-09-13T00:54:45.098489368Z" level=info 
msg="CreateContainer within sandbox \"f715c0c0294d54f58ea67fcc81deb74cefe854d39afcdcd834805829b898ef67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:54:45.110078 kubelet[1572]: W0913 00:54:45.110025 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:45.110128 kubelet[1572]: E0913 00:54:45.110083 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:45.821017 kubelet[1572]: W0913 00:54:45.820952 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:45.821133 kubelet[1572]: E0913 00:54:45.821029 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:45.950435 env[1211]: time="2025-09-13T00:54:45.950364738Z" level=info msg="CreateContainer within sandbox \"f78d138ce032495d5ab68cf66a54e656e47772e5cce3fbf667dc760772709e23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8725d636eed0de246d77ca391fb6a2a8106acb0e51498972cbf62cb1a8c3586e\"" Sep 13 00:54:45.951059 env[1211]: time="2025-09-13T00:54:45.951034920Z" level=info msg="StartContainer for \"8725d636eed0de246d77ca391fb6a2a8106acb0e51498972cbf62cb1a8c3586e\"" Sep 13 00:54:45.962517 env[1211]: time="2025-09-13T00:54:45.962451894Z" level=info msg="CreateContainer within sandbox \"3fffecca915dba7f2603ab614e863027540cfa945a8a40599e26ffa1825bdfe6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26969bf0699484ab738ce58bb9292a634ee4ff4bc086d3495c23ccb6050c7dc3\"" Sep 13 00:54:45.963067 env[1211]: time="2025-09-13T00:54:45.963041391Z" level=info msg="StartContainer for \"26969bf0699484ab738ce58bb9292a634ee4ff4bc086d3495c23ccb6050c7dc3\"" Sep 13 00:54:45.966060 env[1211]: time="2025-09-13T00:54:45.966015335Z" level=info msg="CreateContainer within sandbox \"f715c0c0294d54f58ea67fcc81deb74cefe854d39afcdcd834805829b898ef67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bfcd672ec6f2410849bec6df16f0d0e0fb9021f82310f61a7603b69a78dc643a\"" Sep 13 00:54:45.966469 env[1211]: time="2025-09-13T00:54:45.966440118Z" level=info msg="StartContainer for \"bfcd672ec6f2410849bec6df16f0d0e0fb9021f82310f61a7603b69a78dc643a\"" Sep 13 00:54:45.967569 systemd[1]: Started cri-containerd-8725d636eed0de246d77ca391fb6a2a8106acb0e51498972cbf62cb1a8c3586e.scope. Sep 13 00:54:45.989208 systemd[1]: Started cri-containerd-bfcd672ec6f2410849bec6df16f0d0e0fb9021f82310f61a7603b69a78dc643a.scope. Sep 13 00:54:46.079251 systemd[1]: Started cri-containerd-26969bf0699484ab738ce58bb9292a634ee4ff4bc086d3495c23ccb6050c7dc3.scope. 
Sep 13 00:54:46.089992 kubelet[1572]: I0913 00:54:46.089630 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:54:46.089992 kubelet[1572]: E0913 00:54:46.089966 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Sep 13 00:54:46.122798 env[1211]: time="2025-09-13T00:54:46.116496344Z" level=info msg="StartContainer for \"8725d636eed0de246d77ca391fb6a2a8106acb0e51498972cbf62cb1a8c3586e\" returns successfully" Sep 13 00:54:46.126454 env[1211]: time="2025-09-13T00:54:46.124500707Z" level=info msg="StartContainer for \"bfcd672ec6f2410849bec6df16f0d0e0fb9021f82310f61a7603b69a78dc643a\" returns successfully" Sep 13 00:54:46.155344 env[1211]: time="2025-09-13T00:54:46.155303638Z" level=info msg="StartContainer for \"26969bf0699484ab738ce58bb9292a634ee4ff4bc086d3495c23ccb6050c7dc3\" returns successfully" Sep 13 00:54:46.163621 kubelet[1572]: W0913 00:54:46.163300 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Sep 13 00:54:46.163621 kubelet[1572]: E0913 00:54:46.163439 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:54:47.123123 kubelet[1572]: E0913 00:54:47.122858 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:47.123123 kubelet[1572]: E0913 00:54:47.123033 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:47.125415 kubelet[1572]: E0913 00:54:47.125399 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:47.125629 kubelet[1572]: E0913 00:54:47.125617 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:47.127867 kubelet[1572]: E0913 00:54:47.127848 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:47.128058 kubelet[1572]: E0913 00:54:47.128038 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:47.993869 kubelet[1572]: E0913 00:54:47.993833 1572 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 00:54:48.130210 kubelet[1572]: E0913 00:54:48.130182 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:48.130524 kubelet[1572]: E0913 
00:54:48.130300 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:48.130524 kubelet[1572]: E0913 00:54:48.130371 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:48.130524 kubelet[1572]: E0913 00:54:48.130477 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:48.130605 kubelet[1572]: E0913 00:54:48.130527 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:48.130688 kubelet[1572]: E0913 00:54:48.130650 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:48.183217 kubelet[1572]: E0913 00:54:48.183184 1572 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:54:48.339737 kubelet[1572]: E0913 00:54:48.339675 1572 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 00:54:48.771307 kubelet[1572]: E0913 00:54:48.771199 1572 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 13 00:54:49.131355 kubelet[1572]: E0913 00:54:49.131325 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:49.131691 kubelet[1572]: E0913 00:54:49.131437 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:49.131691 kubelet[1572]: E0913 00:54:49.131458 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:54:49.131691 kubelet[1572]: E0913 00:54:49.131587 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:49.291544 kubelet[1572]: I0913 00:54:49.291511 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:54:49.299756 kubelet[1572]: I0913 00:54:49.299730 1572 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:54:49.299756 kubelet[1572]: E0913 00:54:49.299754 1572 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:54:49.306951 kubelet[1572]: E0913 00:54:49.306909 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:49.407579 kubelet[1572]: E0913 00:54:49.407487 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:49.507904 kubelet[1572]: E0913 00:54:49.507846 1572 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:49.608345 kubelet[1572]: E0913 00:54:49.608311 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:49.709401 kubelet[1572]: E0913 00:54:49.709316 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:49.720261 systemd[1]: Reloading. Sep 13 00:54:49.785146 /usr/lib/systemd/system-generators/torcx-generator[1863]: time="2025-09-13T00:54:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:54:49.785181 /usr/lib/systemd/system-generators/torcx-generator[1863]: time="2025-09-13T00:54:49Z" level=info msg="torcx already run" Sep 13 00:54:49.810186 kubelet[1572]: E0913 00:54:49.810141 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:49.897924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:54:49.897940 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:54:49.911212 kubelet[1572]: E0913 00:54:49.911172 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:54:49.914503 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:54:50.008424 systemd[1]: Stopping kubelet.service... Sep 13 00:54:50.029097 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:54:50.029250 systemd[1]: Stopped kubelet.service. Sep 13 00:54:50.030630 systemd[1]: Starting kubelet.service... Sep 13 00:54:50.120896 systemd[1]: Started kubelet.service. Sep 13 00:54:50.167476 kubelet[1909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:54:50.167476 kubelet[1909]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:54:50.167476 kubelet[1909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:54:50.167861 kubelet[1909]: I0913 00:54:50.167526 1909 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:54:50.173476 kubelet[1909]: I0913 00:54:50.173440 1909 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:54:50.173476 kubelet[1909]: I0913 00:54:50.173461 1909 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:54:50.173818 kubelet[1909]: I0913 00:54:50.173785 1909 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:54:50.174907 kubelet[1909]: I0913 00:54:50.174879 1909 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:54:50.178940 kubelet[1909]: I0913 00:54:50.178892 1909 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:54:50.181590 kubelet[1909]: E0913 00:54:50.181557 1909 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:54:50.181652 kubelet[1909]: I0913 00:54:50.181595 1909 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:54:50.184962 kubelet[1909]: I0913 00:54:50.184933 1909 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:54:50.185180 kubelet[1909]: I0913 00:54:50.185143 1909 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:54:50.185360 kubelet[1909]: I0913 00:54:50.185170 1909 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:54:50.185482 kubelet[1909]: I0913 00:54:50.185368 1909 topology_manager.go:138] "Creating 
topology manager with none policy" Sep 13 00:54:50.185482 kubelet[1909]: I0913 00:54:50.185380 1909 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:54:50.185482 kubelet[1909]: I0913 00:54:50.185430 1909 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:50.185578 kubelet[1909]: I0913 00:54:50.185570 1909 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:54:50.185631 kubelet[1909]: I0913 00:54:50.185594 1909 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:54:50.185631 kubelet[1909]: I0913 00:54:50.185615 1909 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:54:50.185631 kubelet[1909]: I0913 00:54:50.185627 1909 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:54:50.189114 kubelet[1909]: I0913 00:54:50.187921 1909 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:54:50.189114 kubelet[1909]: I0913 00:54:50.188345 1909 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:54:50.189114 kubelet[1909]: I0913 00:54:50.188818 1909 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:54:50.189114 kubelet[1909]: I0913 00:54:50.188846 1909 server.go:1287] "Started kubelet" Sep 13 00:54:50.191550 kubelet[1909]: I0913 00:54:50.191502 1909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:54:50.192023 kubelet[1909]: I0913 00:54:50.191996 1909 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:54:50.193065 kubelet[1909]: I0913 00:54:50.193044 1909 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:54:50.197759 kubelet[1909]: E0913 00:54:50.196125 1909 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:54:50.201764 kubelet[1909]: I0913 00:54:50.200765 1909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:54:50.201764 kubelet[1909]: I0913 00:54:50.201036 1909 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:54:50.203162 kubelet[1909]: I0913 00:54:50.203140 1909 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:54:50.203325 kubelet[1909]: I0913 00:54:50.203308 1909 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:54:50.203458 kubelet[1909]: I0913 00:54:50.203444 1909 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:54:50.205480 kubelet[1909]: I0913 00:54:50.205446 1909 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:54:50.206586 kubelet[1909]: I0913 00:54:50.206563 1909 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:54:50.206586 kubelet[1909]: I0913 00:54:50.206581 1909 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:54:50.206815 kubelet[1909]: I0913 00:54:50.206779 1909 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:54:50.210526 kubelet[1909]: I0913 00:54:50.210472 1909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:54:50.211259 kubelet[1909]: I0913 00:54:50.211233 1909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:54:50.211308 kubelet[1909]: I0913 00:54:50.211265 1909 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:54:50.211308 kubelet[1909]: I0913 00:54:50.211288 1909 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:54:50.211308 kubelet[1909]: I0913 00:54:50.211297 1909 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:54:50.211383 kubelet[1909]: E0913 00:54:50.211344 1909 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:54:50.237826 kubelet[1909]: I0913 00:54:50.237783 1909 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:54:50.237826 kubelet[1909]: I0913 00:54:50.237814 1909 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:54:50.237995 kubelet[1909]: I0913 00:54:50.237852 1909 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:54:50.238023 kubelet[1909]: I0913 00:54:50.238012 1909 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:54:50.238063 kubelet[1909]: I0913 00:54:50.238022 1909 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:54:50.238063 kubelet[1909]: I0913 00:54:50.238039 1909 policy_none.go:49] "None policy: Start" Sep 13 00:54:50.238063 kubelet[1909]: I0913 00:54:50.238048 1909 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:54:50.238063 kubelet[1909]: I0913 00:54:50.238057 1909 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:54:50.238164 kubelet[1909]: I0913 00:54:50.238151 1909 state_mem.go:75] "Updated machine memory state" Sep 13 00:54:50.241353 kubelet[1909]: I0913 00:54:50.241312 1909 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:54:50.241513 kubelet[1909]: I0913 00:54:50.241490 1909 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:54:50.241556 kubelet[1909]: I0913 00:54:50.241506 1909 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:54:50.241829 kubelet[1909]: I0913 00:54:50.241703 1909 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:54:50.242415 kubelet[1909]: E0913 00:54:50.242392 1909 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:54:50.312377 kubelet[1909]: I0913 00:54:50.312305 1909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:54:50.312553 kubelet[1909]: I0913 00:54:50.312385 1909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:50.312553 kubelet[1909]: I0913 00:54:50.312480 1909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:50.346856 kubelet[1909]: I0913 00:54:50.346811 1909 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:54:50.353780 kubelet[1909]: I0913 00:54:50.353749 1909 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:54:50.353966 kubelet[1909]: I0913 00:54:50.353832 1909 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:54:50.405774 kubelet[1909]: I0913 00:54:50.405709 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5eccaae14580dae2e2bf34e88773be2a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5eccaae14580dae2e2bf34e88773be2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:50.405774 kubelet[1909]: I0913 00:54:50.405759 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5eccaae14580dae2e2bf34e88773be2a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5eccaae14580dae2e2bf34e88773be2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:50.405774 kubelet[1909]: I0913 00:54:50.405786 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:50.406076 kubelet[1909]: I0913 00:54:50.405820 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:54:50.406076 kubelet[1909]: I0913 00:54:50.405892 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5eccaae14580dae2e2bf34e88773be2a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5eccaae14580dae2e2bf34e88773be2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:50.406076 kubelet[1909]: I0913 00:54:50.405947 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:50.406076 kubelet[1909]: I0913 00:54:50.405973 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:50.406076 kubelet[1909]: I0913 00:54:50.405998 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:50.406220 kubelet[1909]: I0913 00:54:50.406049 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:54:50.554946 sudo[1943]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:54:50.555188 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:54:50.621350 kubelet[1909]: E0913 00:54:50.621251 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:50.621489 kubelet[1909]: E0913 00:54:50.621260 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:50.621627 kubelet[1909]: E0913 00:54:50.621602 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:51.051216 sudo[1943]: pam_unix(sudo:session): session closed for user root Sep 13 00:54:51.186446 kubelet[1909]: I0913 00:54:51.186413 1909 apiserver.go:52] "Watching apiserver" Sep 13 00:54:51.204189 kubelet[1909]: I0913 00:54:51.204138 1909 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:54:51.222624 kubelet[1909]: I0913 00:54:51.222599 1909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:54:51.222806 kubelet[1909]: E0913 00:54:51.222769 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:51.223077 kubelet[1909]: I0913 00:54:51.223061 1909 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:51.480754 kubelet[1909]: E0913 00:54:51.480603 1909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:54:51.480915 kubelet[1909]: E0913 00:54:51.480811 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:51.482145 kubelet[1909]: E0913 00:54:51.482090 1909 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Sep 13 00:54:51.482336 kubelet[1909]: E0913 00:54:51.482262 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:51.489363 kubelet[1909]: I0913 00:54:51.489302 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.489279792 podStartE2EDuration="1.489279792s" podCreationTimestamp="2025-09-13 00:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:51.482782047 +0000 UTC m=+1.358915869" watchObservedRunningTime="2025-09-13 00:54:51.489279792 +0000 UTC m=+1.365413613" Sep 13 00:54:51.497352 kubelet[1909]: I0913 00:54:51.497300 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.497282756 podStartE2EDuration="1.497282756s" podCreationTimestamp="2025-09-13 00:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:51.489457189 +0000 UTC m=+1.365591010" watchObservedRunningTime="2025-09-13 00:54:51.497282756 +0000 UTC m=+1.373416577" Sep 13 00:54:51.497545 kubelet[1909]: I0913 00:54:51.497370 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.497366496 podStartE2EDuration="1.497366496s" podCreationTimestamp="2025-09-13 00:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:51.497275462 +0000 UTC m=+1.373409283" watchObservedRunningTime="2025-09-13 00:54:51.497366496 +0000 UTC m=+1.373500317" Sep 13 00:54:52.224282 kubelet[1909]: E0913 00:54:52.224254 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:52.224588 kubelet[1909]: E0913 00:54:52.224424 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:52.981200 sudo[1309]: pam_unix(sudo:session): session closed for user root Sep 13 00:54:52.982421 sshd[1305]: pam_unix(sshd:session): session closed for user core Sep 13 00:54:52.984595 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:41552.service: Deactivated successfully. Sep 13 00:54:52.985372 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:54:52.985509 systemd[1]: session-5.scope: Consumed 4.761s CPU time. Sep 13 00:54:52.985975 systemd-logind[1197]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:54:52.986847 systemd-logind[1197]: Removed session 5. Sep 13 00:54:53.225438 kubelet[1909]: E0913 00:54:53.225398 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:54:55.572107 update_engine[1199]: I0913 00:54:55.572044 1199 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:54:57.040335 kubelet[1909]: I0913 00:54:57.040282 1909 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:54:57.040850 env[1211]: time="2025-09-13T00:54:57.040644641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:54:57.041084 kubelet[1909]: I0913 00:54:57.040930 1909 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:54:57.473294 kubelet[1909]: E0913 00:54:57.473264 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:57.928010 systemd[1]: Created slice kubepods-besteffort-pod2aee4fc9_d21c_443b_9a6f_2422ec0d1933.slice.
Sep 13 00:54:57.947866 systemd[1]: Created slice kubepods-burstable-podaf42ab2b_6b79_49a4_849d_801bfc59adce.slice.
Sep 13 00:54:57.989719 kubelet[1909]: I0913 00:54:57.989643 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2aee4fc9-d21c-443b-9a6f-2422ec0d1933-xtables-lock\") pod \"kube-proxy-zdk2s\" (UID: \"2aee4fc9-d21c-443b-9a6f-2422ec0d1933\") " pod="kube-system/kube-proxy-zdk2s"
Sep 13 00:54:57.989719 kubelet[1909]: I0913 00:54:57.989708 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2aee4fc9-d21c-443b-9a6f-2422ec0d1933-kube-proxy\") pod \"kube-proxy-zdk2s\" (UID: \"2aee4fc9-d21c-443b-9a6f-2422ec0d1933\") " pod="kube-system/kube-proxy-zdk2s"
Sep 13 00:54:58.090628 kubelet[1909]: I0913 00:54:58.090577 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-xtables-lock\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.090628 kubelet[1909]: I0913 00:54:58.090611 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-hostproc\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.090628 kubelet[1909]: I0913 00:54:58.090627 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-hubble-tls\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.090628 kubelet[1909]: I0913 00:54:58.090640 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-net\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091075 kubelet[1909]: I0913 00:54:58.090674 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2aee4fc9-d21c-443b-9a6f-2422ec0d1933-lib-modules\") pod \"kube-proxy-zdk2s\" (UID: \"2aee4fc9-d21c-443b-9a6f-2422ec0d1933\") " pod="kube-system/kube-proxy-zdk2s"
Sep 13 00:54:58.091075 kubelet[1909]: I0913 00:54:58.090691 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-bpf-maps\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091075 kubelet[1909]: I0913 00:54:58.090706 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-etc-cni-netd\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091075 kubelet[1909]: I0913 00:54:58.090755 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-config-path\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091075 kubelet[1909]: I0913 00:54:58.090810 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mpf\" (UniqueName: \"kubernetes.io/projected/2aee4fc9-d21c-443b-9a6f-2422ec0d1933-kube-api-access-q9mpf\") pod \"kube-proxy-zdk2s\" (UID: \"2aee4fc9-d21c-443b-9a6f-2422ec0d1933\") " pod="kube-system/kube-proxy-zdk2s"
Sep 13 00:54:58.091075 kubelet[1909]: I0913 00:54:58.090836 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cni-path\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091216 kubelet[1909]: I0913 00:54:58.090851 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-kernel\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091216 kubelet[1909]: I0913 00:54:58.090879 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9j9j\" (UniqueName: \"kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-kube-api-access-w9j9j\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091216 kubelet[1909]: I0913 00:54:58.090902 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-run\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091216 kubelet[1909]: I0913 00:54:58.090918 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-cgroup\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091216 kubelet[1909]: I0913 00:54:58.090931 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-lib-modules\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.091335 kubelet[1909]: I0913 00:54:58.090946 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af42ab2b-6b79-49a4-849d-801bfc59adce-clustermesh-secrets\") pod \"cilium-d2m9r\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " pod="kube-system/cilium-d2m9r"
Sep 13 00:54:58.192629 kubelet[1909]: I0913 00:54:58.192509 1909 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 00:54:58.230399 kubelet[1909]: I0913 00:54:58.230342 1909 status_manager.go:890] "Failed to get status for pod" podUID="6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e" pod="kube-system/cilium-operator-6c4d7847fc-nbw54" err="pods \"cilium-operator-6c4d7847fc-nbw54\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
Sep 13 00:54:58.231041 kubelet[1909]: E0913 00:54:58.231010 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:58.234507 systemd[1]: Created slice kubepods-besteffort-pod6ba12dc0_e9dc_47ea_910a_2801a9cbfe8e.slice.
Sep 13 00:54:58.246244 kubelet[1909]: E0913 00:54:58.246208 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:58.246968 env[1211]: time="2025-09-13T00:54:58.246921764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdk2s,Uid:2aee4fc9-d21c-443b-9a6f-2422ec0d1933,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:58.250379 kubelet[1909]: E0913 00:54:58.250346 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:58.252997 env[1211]: time="2025-09-13T00:54:58.252956615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2m9r,Uid:af42ab2b-6b79-49a4-849d-801bfc59adce,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:58.270755 env[1211]: time="2025-09-13T00:54:58.270575879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:58.270755 env[1211]: time="2025-09-13T00:54:58.270609573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:58.270755 env[1211]: time="2025-09-13T00:54:58.270618640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:58.270895 env[1211]: time="2025-09-13T00:54:58.270815201Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2 pid=2023 runtime=io.containerd.runc.v2
Sep 13 00:54:58.271012 env[1211]: time="2025-09-13T00:54:58.270867661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:58.271012 env[1211]: time="2025-09-13T00:54:58.270893129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:58.271012 env[1211]: time="2025-09-13T00:54:58.270902226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:58.271242 env[1211]: time="2025-09-13T00:54:58.271165865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ea3c73fe0fc16aa818898fbe5fb86f280e61a903243e8cf036c976bfacff16a pid=2024 runtime=io.containerd.runc.v2
Sep 13 00:54:58.280830 systemd[1]: Started cri-containerd-9ea3c73fe0fc16aa818898fbe5fb86f280e61a903243e8cf036c976bfacff16a.scope.
Sep 13 00:54:58.286838 systemd[1]: Started cri-containerd-5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2.scope.
Sep 13 00:54:58.293493 kubelet[1909]: I0913 00:54:58.293134 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-kube-api-access-x5ck4\") pod \"cilium-operator-6c4d7847fc-nbw54\" (UID: \"6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e\") " pod="kube-system/cilium-operator-6c4d7847fc-nbw54"
Sep 13 00:54:58.293493 kubelet[1909]: I0913 00:54:58.293185 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nbw54\" (UID: \"6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e\") " pod="kube-system/cilium-operator-6c4d7847fc-nbw54"
Sep 13 00:54:58.305713 env[1211]: time="2025-09-13T00:54:58.305667630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdk2s,Uid:2aee4fc9-d21c-443b-9a6f-2422ec0d1933,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ea3c73fe0fc16aa818898fbe5fb86f280e61a903243e8cf036c976bfacff16a\""
Sep 13 00:54:58.306427 kubelet[1909]: E0913 00:54:58.306405 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:58.308173 env[1211]: time="2025-09-13T00:54:58.308134194Z" level=info msg="CreateContainer within sandbox \"9ea3c73fe0fc16aa818898fbe5fb86f280e61a903243e8cf036c976bfacff16a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:54:58.311198 env[1211]: time="2025-09-13T00:54:58.310999662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2m9r,Uid:af42ab2b-6b79-49a4-849d-801bfc59adce,Namespace:kube-system,Attempt:0,} returns sandbox id \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\""
Sep 13 00:54:58.311690 kubelet[1909]: E0913 00:54:58.311649 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:58.313303 env[1211]: time="2025-09-13T00:54:58.313275335Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:54:58.326799 env[1211]: time="2025-09-13T00:54:58.326750654Z" level=info msg="CreateContainer within sandbox \"9ea3c73fe0fc16aa818898fbe5fb86f280e61a903243e8cf036c976bfacff16a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a35a75c01590172f11561b2ea63b4a6640c0154860532dfc2236d72db3e5e97c\""
Sep 13 00:54:58.327218 env[1211]: time="2025-09-13T00:54:58.327197850Z" level=info msg="StartContainer for \"a35a75c01590172f11561b2ea63b4a6640c0154860532dfc2236d72db3e5e97c\""
Sep 13 00:54:58.343858 systemd[1]: Started cri-containerd-a35a75c01590172f11561b2ea63b4a6640c0154860532dfc2236d72db3e5e97c.scope.
Sep 13 00:54:58.373436 env[1211]: time="2025-09-13T00:54:58.373397514Z" level=info msg="StartContainer for \"a35a75c01590172f11561b2ea63b4a6640c0154860532dfc2236d72db3e5e97c\" returns successfully"
Sep 13 00:54:58.538731 kubelet[1909]: E0913 00:54:58.538601 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:58.539137 env[1211]: time="2025-09-13T00:54:58.539061029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nbw54,Uid:6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e,Namespace:kube-system,Attempt:0,}"
Sep 13 00:54:58.554595 env[1211]: time="2025-09-13T00:54:58.554505151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:54:58.554747 env[1211]: time="2025-09-13T00:54:58.554601032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:54:58.554747 env[1211]: time="2025-09-13T00:54:58.554627913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:54:58.555086 env[1211]: time="2025-09-13T00:54:58.554978766Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8 pid=2174 runtime=io.containerd.runc.v2
Sep 13 00:54:58.565999 systemd[1]: Started cri-containerd-7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8.scope.
Sep 13 00:54:58.600201 env[1211]: time="2025-09-13T00:54:58.600143474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nbw54,Uid:6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8\""
Sep 13 00:54:58.602053 kubelet[1909]: E0913 00:54:58.602020 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:54:59.234928 kubelet[1909]: E0913 00:54:59.234904 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:00.333497 kubelet[1909]: E0913 00:55:00.333412 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:00.344353 kubelet[1909]: I0913 00:55:00.344221 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zdk2s" podStartSLOduration=3.344135981 podStartE2EDuration="3.344135981s" podCreationTimestamp="2025-09-13 00:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:54:59.242297105 +0000 UTC m=+9.118430926" watchObservedRunningTime="2025-09-13 00:55:00.344135981 +0000 UTC m=+10.220269802"
Sep 13 00:55:00.809881 kubelet[1909]: E0913 00:55:00.809855 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:01.238066 kubelet[1909]: E0913 00:55:01.237953 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:01.238207 kubelet[1909]: E0913 00:55:01.238131 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:02.239292 kubelet[1909]: E0913 00:55:02.239259 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:06.188905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525851861.mount: Deactivated successfully.
Sep 13 00:55:10.003504 env[1211]: time="2025-09-13T00:55:10.003434077Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:55:10.005258 env[1211]: time="2025-09-13T00:55:10.005220038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:55:10.008196 env[1211]: time="2025-09-13T00:55:10.008162758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:55:10.008779 env[1211]: time="2025-09-13T00:55:10.008749652Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 00:55:10.010021 env[1211]: time="2025-09-13T00:55:10.009975760Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:55:10.011355 env[1211]: time="2025-09-13T00:55:10.011320001Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:55:10.029938 env[1211]: time="2025-09-13T00:55:10.029877010Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\""
Sep 13 00:55:10.030506 env[1211]: time="2025-09-13T00:55:10.030463714Z" level=info msg="StartContainer for \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\""
Sep 13 00:55:10.049754 systemd[1]: Started cri-containerd-943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff.scope.
Sep 13 00:55:10.084065 systemd[1]: cri-containerd-943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff.scope: Deactivated successfully.
Sep 13 00:55:10.279262 env[1211]: time="2025-09-13T00:55:10.279139862Z" level=info msg="StartContainer for \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\" returns successfully"
Sep 13 00:55:10.525275 env[1211]: time="2025-09-13T00:55:10.525228139Z" level=info msg="shim disconnected" id=943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff
Sep 13 00:55:10.525275 env[1211]: time="2025-09-13T00:55:10.525266311Z" level=warning msg="cleaning up after shim disconnected" id=943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff namespace=k8s.io
Sep 13 00:55:10.525275 env[1211]: time="2025-09-13T00:55:10.525275018Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:10.531211 env[1211]: time="2025-09-13T00:55:10.531111784Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2352 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:11.026204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff-rootfs.mount: Deactivated successfully.
Sep 13 00:55:11.285469 kubelet[1909]: E0913 00:55:11.285170 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:11.286693 env[1211]: time="2025-09-13T00:55:11.286644038Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:55:11.299355 env[1211]: time="2025-09-13T00:55:11.299305435Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\""
Sep 13 00:55:11.300084 env[1211]: time="2025-09-13T00:55:11.300062599Z" level=info msg="StartContainer for \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\""
Sep 13 00:55:11.314174 systemd[1]: Started cri-containerd-25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724.scope.
Sep 13 00:55:11.336854 env[1211]: time="2025-09-13T00:55:11.336789623Z" level=info msg="StartContainer for \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\" returns successfully"
Sep 13 00:55:11.345882 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:55:11.346069 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:55:11.346247 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 00:55:11.347507 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:55:11.347757 systemd[1]: cri-containerd-25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724.scope: Deactivated successfully.
Sep 13 00:55:11.355315 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:55:11.375066 env[1211]: time="2025-09-13T00:55:11.375022951Z" level=info msg="shim disconnected" id=25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724
Sep 13 00:55:11.375254 env[1211]: time="2025-09-13T00:55:11.375223639Z" level=warning msg="cleaning up after shim disconnected" id=25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724 namespace=k8s.io
Sep 13 00:55:11.375254 env[1211]: time="2025-09-13T00:55:11.375243526Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:11.381367 env[1211]: time="2025-09-13T00:55:11.381333597Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2417 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:12.026156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724-rootfs.mount: Deactivated successfully.
Sep 13 00:55:12.201857 env[1211]: time="2025-09-13T00:55:12.201813941Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:55:12.203590 env[1211]: time="2025-09-13T00:55:12.203539628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:55:12.205104 env[1211]: time="2025-09-13T00:55:12.205080006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:55:12.205523 env[1211]: time="2025-09-13T00:55:12.205498053Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:55:12.210072 env[1211]: time="2025-09-13T00:55:12.210043414Z" level=info msg="CreateContainer within sandbox \"7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:55:12.221129 env[1211]: time="2025-09-13T00:55:12.221078777Z" level=info msg="CreateContainer within sandbox \"7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\""
Sep 13 00:55:12.221681 env[1211]: time="2025-09-13T00:55:12.221609164Z" level=info msg="StartContainer for \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\""
Sep 13 00:55:12.240771 systemd[1]: Started cri-containerd-bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8.scope.
Sep 13 00:55:12.265252 env[1211]: time="2025-09-13T00:55:12.265216197Z" level=info msg="StartContainer for \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\" returns successfully"
Sep 13 00:55:12.288438 kubelet[1909]: E0913 00:55:12.288349 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:12.290124 kubelet[1909]: E0913 00:55:12.290053 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:12.295101 env[1211]: time="2025-09-13T00:55:12.291864503Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:55:12.316223 kubelet[1909]: I0913 00:55:12.316168 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nbw54" podStartSLOduration=0.71032667 podStartE2EDuration="14.316147058s" podCreationTimestamp="2025-09-13 00:54:58 +0000 UTC" firstStartedPulling="2025-09-13 00:54:58.603085376 +0000 UTC m=+8.479219197" lastFinishedPulling="2025-09-13 00:55:12.208905764 +0000 UTC m=+22.085039585" observedRunningTime="2025-09-13 00:55:12.298452978 +0000 UTC m=+22.174586799" watchObservedRunningTime="2025-09-13 00:55:12.316147058 +0000 UTC m=+22.192280879"
Sep 13 00:55:12.318890 env[1211]: time="2025-09-13T00:55:12.318841017Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\""
Sep 13 00:55:12.319438 env[1211]: time="2025-09-13T00:55:12.319407041Z" level=info msg="StartContainer for \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\""
Sep 13 00:55:12.341580 systemd[1]: Started cri-containerd-a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84.scope.
Sep 13 00:55:12.370873 env[1211]: time="2025-09-13T00:55:12.370825901Z" level=info msg="StartContainer for \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\" returns successfully"
Sep 13 00:55:12.372774 systemd[1]: cri-containerd-a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84.scope: Deactivated successfully.
Sep 13 00:55:12.641648 env[1211]: time="2025-09-13T00:55:12.641590477Z" level=info msg="shim disconnected" id=a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84
Sep 13 00:55:12.641849 env[1211]: time="2025-09-13T00:55:12.641650770Z" level=warning msg="cleaning up after shim disconnected" id=a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84 namespace=k8s.io
Sep 13 00:55:12.641849 env[1211]: time="2025-09-13T00:55:12.641679233Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:12.650136 env[1211]: time="2025-09-13T00:55:12.650077494Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2514 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:13.026316 systemd[1]: run-containerd-runc-k8s.io-bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8-runc.bLXmGn.mount: Deactivated successfully.
Sep 13 00:55:13.293510 kubelet[1909]: E0913 00:55:13.293375 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:13.293896 kubelet[1909]: E0913 00:55:13.293874 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:13.295289 env[1211]: time="2025-09-13T00:55:13.295243271Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:55:13.313094 env[1211]: time="2025-09-13T00:55:13.313039786Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\""
Sep 13 00:55:13.313461 env[1211]: time="2025-09-13T00:55:13.313426053Z" level=info msg="StartContainer for \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\""
Sep 13 00:55:13.329936 systemd[1]: Started cri-containerd-a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81.scope.
Sep 13 00:55:13.358007 systemd[1]: cri-containerd-a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81.scope: Deactivated successfully.
Sep 13 00:55:13.359119 env[1211]: time="2025-09-13T00:55:13.358914145Z" level=info msg="StartContainer for \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\" returns successfully"
Sep 13 00:55:13.394528 env[1211]: time="2025-09-13T00:55:13.385874197Z" level=info msg="shim disconnected" id=a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81
Sep 13 00:55:13.394528 env[1211]: time="2025-09-13T00:55:13.385916858Z" level=warning msg="cleaning up after shim disconnected" id=a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81 namespace=k8s.io
Sep 13 00:55:13.394528 env[1211]: time="2025-09-13T00:55:13.385924813Z" level=info msg="cleaning up dead shim"
Sep 13 00:55:13.398790 env[1211]: time="2025-09-13T00:55:13.398750007Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:55:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2571 runtime=io.containerd.runc.v2\n"
Sep 13 00:55:13.737093 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:38162.service.
Sep 13 00:55:13.774885 sshd[2585]: Accepted publickey for core from 10.0.0.1 port 38162 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:13.775889 sshd[2585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:13.779253 systemd-logind[1197]: New session 6 of user core.
Sep 13 00:55:13.780044 systemd[1]: Started session-6.scope.
Sep 13 00:55:13.891093 sshd[2585]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:13.893333 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:38162.service: Deactivated successfully.
Sep 13 00:55:13.894013 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:55:13.894566 systemd-logind[1197]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:55:13.895220 systemd-logind[1197]: Removed session 6.
Sep 13 00:55:14.026423 systemd[1]: run-containerd-runc-k8s.io-a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81-runc.e5N5ec.mount: Deactivated successfully.
Sep 13 00:55:14.026514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81-rootfs.mount: Deactivated successfully.
Sep 13 00:55:14.298068 kubelet[1909]: E0913 00:55:14.298023 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:14.300159 env[1211]: time="2025-09-13T00:55:14.299652231Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:55:14.316149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663921897.mount: Deactivated successfully.
Sep 13 00:55:14.319734 env[1211]: time="2025-09-13T00:55:14.319693771Z" level=info msg="CreateContainer within sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\""
Sep 13 00:55:14.320320 env[1211]: time="2025-09-13T00:55:14.320276196Z" level=info msg="StartContainer for \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\""
Sep 13 00:55:14.335159 systemd[1]: Started cri-containerd-7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3.scope.
Sep 13 00:55:14.359804 env[1211]: time="2025-09-13T00:55:14.359761323Z" level=info msg="StartContainer for \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\" returns successfully"
Sep 13 00:55:14.431069 kubelet[1909]: I0913 00:55:14.431002 1909 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 00:55:14.483323 systemd[1]: Created slice kubepods-burstable-pod8e32e1dd_ef85_43e8_8311_13902689ddc0.slice.
Sep 13 00:55:14.490918 systemd[1]: Created slice kubepods-burstable-podc1ed0c1d_00c4_4f1e_8e8c_58c946de4b0c.slice.
Sep 13 00:55:14.591961 kubelet[1909]: I0913 00:55:14.591844 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq6zq\" (UniqueName: \"kubernetes.io/projected/8e32e1dd-ef85-43e8-8311-13902689ddc0-kube-api-access-tq6zq\") pod \"coredns-668d6bf9bc-vrnp8\" (UID: \"8e32e1dd-ef85-43e8-8311-13902689ddc0\") " pod="kube-system/coredns-668d6bf9bc-vrnp8"
Sep 13 00:55:14.591961 kubelet[1909]: I0913 00:55:14.591891 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpzx\" (UniqueName: \"kubernetes.io/projected/c1ed0c1d-00c4-4f1e-8e8c-58c946de4b0c-kube-api-access-rtpzx\") pod \"coredns-668d6bf9bc-zvfgm\" (UID: \"c1ed0c1d-00c4-4f1e-8e8c-58c946de4b0c\") " pod="kube-system/coredns-668d6bf9bc-zvfgm"
Sep 13 00:55:14.591961 kubelet[1909]: I0913 00:55:14.591913 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1ed0c1d-00c4-4f1e-8e8c-58c946de4b0c-config-volume\") pod \"coredns-668d6bf9bc-zvfgm\" (UID: \"c1ed0c1d-00c4-4f1e-8e8c-58c946de4b0c\") " pod="kube-system/coredns-668d6bf9bc-zvfgm"
Sep 13 00:55:14.591961 kubelet[1909]: I0913 00:55:14.591936 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e32e1dd-ef85-43e8-8311-13902689ddc0-config-volume\") pod \"coredns-668d6bf9bc-vrnp8\" (UID: \"8e32e1dd-ef85-43e8-8311-13902689ddc0\") " pod="kube-system/coredns-668d6bf9bc-vrnp8"
Sep 13 00:55:14.787916 kubelet[1909]: E0913 00:55:14.787856 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:14.788685 env[1211]: time="2025-09-13T00:55:14.788621538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrnp8,Uid:8e32e1dd-ef85-43e8-8311-13902689ddc0,Namespace:kube-system,Attempt:0,}"
Sep 13 00:55:14.797353 kubelet[1909]: E0913 00:55:14.797317 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:14.797901 env[1211]: time="2025-09-13T00:55:14.797852377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvfgm,Uid:c1ed0c1d-00c4-4f1e-8e8c-58c946de4b0c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:55:15.307694 kubelet[1909]: E0913 00:55:15.306895 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:15.320859 kubelet[1909]: I0913 00:55:15.320782 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d2m9r" podStartSLOduration=6.623695832 podStartE2EDuration="18.320742725s" podCreationTimestamp="2025-09-13 00:54:57 +0000 UTC" firstStartedPulling="2025-09-13 00:54:58.312630626 +0000 UTC m=+8.188764447" lastFinishedPulling="2025-09-13 00:55:10.009677499 +0000 UTC m=+19.885811340" observedRunningTime="2025-09-13 00:55:15.320561935 +0000 UTC m=+25.196695746" watchObservedRunningTime="2025-09-13 00:55:15.320742725 +0000 UTC m=+25.196876546"
Sep 13 00:55:16.298554 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 13 00:55:16.298695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 00:55:16.295805 systemd-networkd[1038]: cilium_host: Link UP
Sep 13 00:55:16.295916 systemd-networkd[1038]: cilium_net: Link UP
Sep 13 00:55:16.297447 systemd-networkd[1038]: cilium_net: Gained carrier
Sep 13 00:55:16.299342 systemd-networkd[1038]: cilium_host: Gained carrier
Sep 13 00:55:16.299463 systemd-networkd[1038]: cilium_net: Gained IPv6LL
Sep 13 00:55:16.299630 systemd-networkd[1038]: cilium_host: Gained IPv6LL
Sep 13 00:55:16.308540 kubelet[1909]: E0913 00:55:16.308458 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:16.373953 systemd-networkd[1038]: cilium_vxlan: Link UP
Sep 13 00:55:16.373959 systemd-networkd[1038]: cilium_vxlan: Gained carrier
Sep 13 00:55:16.554692 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:55:17.075921 systemd-networkd[1038]: lxc_health: Link UP
Sep 13 00:55:17.084693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:55:17.084906 systemd-networkd[1038]: lxc_health: Gained carrier
Sep 13 00:55:17.310879 kubelet[1909]: E0913 00:55:17.310833 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:17.335217 systemd-networkd[1038]: lxcedd075acae5b: Link UP
Sep 13 00:55:17.341692 kernel: eth0: renamed from tmp5b2e8
Sep 13 00:55:17.349621 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 00:55:17.349694 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcedd075acae5b: link becomes ready
Sep 13 00:55:17.350564 systemd-networkd[1038]: lxcedd075acae5b: Gained carrier
Sep 13 00:55:17.351028 systemd-networkd[1038]: lxc9b13f3262998: Link UP
Sep 13 00:55:17.363702 kernel: eth0: renamed from tmp134d9
Sep 13 00:55:17.371764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9b13f3262998: link becomes ready
Sep 13 00:55:17.371559 systemd-networkd[1038]: lxc9b13f3262998: Gained carrier
Sep 13 00:55:17.432785 systemd-networkd[1038]: cilium_vxlan: Gained IPv6LL
Sep 13 00:55:18.275277 systemd-networkd[1038]: lxc_health: Gained IPv6LL
Sep 13 00:55:18.312218 kubelet[1909]: E0913 00:55:18.312168 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:18.456797 systemd-networkd[1038]: lxc9b13f3262998: Gained IPv6LL
Sep 13 00:55:18.712803 systemd-networkd[1038]: lxcedd075acae5b: Gained IPv6LL
Sep 13 00:55:18.894823 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:38176.service.
Sep 13 00:55:18.930336 sshd[3134]: Accepted publickey for core from 10.0.0.1 port 38176 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:18.931154 sshd[3134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:18.934141 systemd-logind[1197]: New session 7 of user core.
Sep 13 00:55:18.934893 systemd[1]: Started session-7.scope.
Sep 13 00:55:19.039705 sshd[3134]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:19.042066 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:38176.service: Deactivated successfully.
Sep 13 00:55:19.042718 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:55:19.043419 systemd-logind[1197]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:55:19.044246 systemd-logind[1197]: Removed session 7.
Sep 13 00:55:19.314008 kubelet[1909]: E0913 00:55:19.313983 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:20.316767 kubelet[1909]: E0913 00:55:20.316723 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:20.591203 env[1211]: time="2025-09-13T00:55:20.591059391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:55:20.591203 env[1211]: time="2025-09-13T00:55:20.591110567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:55:20.591203 env[1211]: time="2025-09-13T00:55:20.591123723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:55:20.591649 env[1211]: time="2025-09-13T00:55:20.591273834Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b2e84684c7983557e768a378245f03289ab5471c5f319eb14896c50ff32084c pid=3164 runtime=io.containerd.runc.v2
Sep 13 00:55:20.600424 env[1211]: time="2025-09-13T00:55:20.600338655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:55:20.600505 env[1211]: time="2025-09-13T00:55:20.600419998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:55:20.600505 env[1211]: time="2025-09-13T00:55:20.600459252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:55:20.600761 env[1211]: time="2025-09-13T00:55:20.600723027Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/134d950ed396bac235274762b0ef0828adc0048e7a43cddfca6d150f0500aa3c pid=3187 runtime=io.containerd.runc.v2
Sep 13 00:55:20.610048 systemd[1]: Started cri-containerd-5b2e84684c7983557e768a378245f03289ab5471c5f319eb14896c50ff32084c.scope.
Sep 13 00:55:20.620206 systemd[1]: Started cri-containerd-134d950ed396bac235274762b0ef0828adc0048e7a43cddfca6d150f0500aa3c.scope.
Sep 13 00:55:20.623698 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:55:20.629440 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:55:20.645553 env[1211]: time="2025-09-13T00:55:20.645503703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvfgm,Uid:c1ed0c1d-00c4-4f1e-8e8c-58c946de4b0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b2e84684c7983557e768a378245f03289ab5471c5f319eb14896c50ff32084c\""
Sep 13 00:55:20.646230 kubelet[1909]: E0913 00:55:20.646199 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:20.649711 env[1211]: time="2025-09-13T00:55:20.649669569Z" level=info msg="CreateContainer within sandbox \"5b2e84684c7983557e768a378245f03289ab5471c5f319eb14896c50ff32084c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:55:20.657714 env[1211]: time="2025-09-13T00:55:20.657656245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrnp8,Uid:8e32e1dd-ef85-43e8-8311-13902689ddc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"134d950ed396bac235274762b0ef0828adc0048e7a43cddfca6d150f0500aa3c\""
Sep 13 00:55:20.659635 kubelet[1909]: E0913 00:55:20.659602 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:20.661701 env[1211]: time="2025-09-13T00:55:20.661145730Z" level=info msg="CreateContainer within sandbox \"134d950ed396bac235274762b0ef0828adc0048e7a43cddfca6d150f0500aa3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:55:20.671251 env[1211]: time="2025-09-13T00:55:20.671201774Z" level=info msg="CreateContainer within sandbox \"5b2e84684c7983557e768a378245f03289ab5471c5f319eb14896c50ff32084c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff7a608165d3e2462692dfe2aed1502a2952a2101bd10c38a1d42ed0795be581\""
Sep 13 00:55:20.671605 env[1211]: time="2025-09-13T00:55:20.671578672Z" level=info msg="StartContainer for \"ff7a608165d3e2462692dfe2aed1502a2952a2101bd10c38a1d42ed0795be581\""
Sep 13 00:55:20.679329 env[1211]: time="2025-09-13T00:55:20.679284290Z" level=info msg="CreateContainer within sandbox \"134d950ed396bac235274762b0ef0828adc0048e7a43cddfca6d150f0500aa3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8baf6510b8210c5be1da58cfa1fced6f2dc09334af35b34cfdc43dd7940708d\""
Sep 13 00:55:20.680991 env[1211]: time="2025-09-13T00:55:20.680942084Z" level=info msg="StartContainer for \"a8baf6510b8210c5be1da58cfa1fced6f2dc09334af35b34cfdc43dd7940708d\""
Sep 13 00:55:20.688229 systemd[1]: Started cri-containerd-ff7a608165d3e2462692dfe2aed1502a2952a2101bd10c38a1d42ed0795be581.scope.
Sep 13 00:55:20.700921 systemd[1]: Started cri-containerd-a8baf6510b8210c5be1da58cfa1fced6f2dc09334af35b34cfdc43dd7940708d.scope.
Sep 13 00:55:20.713369 env[1211]: time="2025-09-13T00:55:20.713322692Z" level=info msg="StartContainer for \"ff7a608165d3e2462692dfe2aed1502a2952a2101bd10c38a1d42ed0795be581\" returns successfully"
Sep 13 00:55:20.725305 env[1211]: time="2025-09-13T00:55:20.725265290Z" level=info msg="StartContainer for \"a8baf6510b8210c5be1da58cfa1fced6f2dc09334af35b34cfdc43dd7940708d\" returns successfully"
Sep 13 00:55:21.326257 kubelet[1909]: E0913 00:55:21.326231 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:21.326257 kubelet[1909]: E0913 00:55:21.326243 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:21.348648 kubelet[1909]: I0913 00:55:21.348585 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vrnp8" podStartSLOduration=23.348566396 podStartE2EDuration="23.348566396s" podCreationTimestamp="2025-09-13 00:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:21.348469443 +0000 UTC m=+31.224603264" watchObservedRunningTime="2025-09-13 00:55:21.348566396 +0000 UTC m=+31.224700217"
Sep 13 00:55:21.348847 kubelet[1909]: I0913 00:55:21.348711 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zvfgm" podStartSLOduration=23.348705156 podStartE2EDuration="23.348705156s" podCreationTimestamp="2025-09-13 00:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:55:21.340957602 +0000 UTC m=+31.217091423" watchObservedRunningTime="2025-09-13 00:55:21.348705156 +0000 UTC m=+31.224838977"
Sep 13 00:55:24.043830 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:38254.service.
Sep 13 00:55:24.079402 sshd[3317]: Accepted publickey for core from 10.0.0.1 port 38254 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:24.080425 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:24.084337 systemd-logind[1197]: New session 8 of user core.
Sep 13 00:55:24.085466 systemd[1]: Started session-8.scope.
Sep 13 00:55:24.198686 sshd[3317]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:24.200919 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:38254.service: Deactivated successfully.
Sep 13 00:55:24.201573 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:55:24.202334 systemd-logind[1197]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:55:24.202995 systemd-logind[1197]: Removed session 8.
Sep 13 00:55:29.203580 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:38268.service.
Sep 13 00:55:29.238742 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 38268 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:29.239852 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:29.242762 systemd-logind[1197]: New session 9 of user core.
Sep 13 00:55:29.243468 systemd[1]: Started session-9.scope.
Sep 13 00:55:29.359638 sshd[3333]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:29.361780 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:38268.service: Deactivated successfully.
Sep 13 00:55:29.362435 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:55:29.362986 systemd-logind[1197]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:55:29.363588 systemd-logind[1197]: Removed session 9.
Sep 13 00:55:31.324579 kubelet[1909]: E0913 00:55:31.324546 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:31.324912 kubelet[1909]: E0913 00:55:31.324683 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:31.337568 kubelet[1909]: E0913 00:55:31.337523 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:31.337929 kubelet[1909]: E0913 00:55:31.337895 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:55:34.364173 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:52916.service.
Sep 13 00:55:34.398981 sshd[3355]: Accepted publickey for core from 10.0.0.1 port 52916 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:34.400039 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:34.403148 systemd-logind[1197]: New session 10 of user core.
Sep 13 00:55:34.404071 systemd[1]: Started session-10.scope.
Sep 13 00:55:34.503776 sshd[3355]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:34.505722 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:52916.service: Deactivated successfully.
Sep 13 00:55:34.506352 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:55:34.506986 systemd-logind[1197]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:55:34.507559 systemd-logind[1197]: Removed session 10.
Sep 13 00:55:39.508213 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:52922.service.
Sep 13 00:55:39.544886 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 52922 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:39.545778 sshd[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:39.549030 systemd-logind[1197]: New session 11 of user core.
Sep 13 00:55:39.549984 systemd[1]: Started session-11.scope.
Sep 13 00:55:39.653691 sshd[3369]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:39.656494 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:52922.service: Deactivated successfully.
Sep 13 00:55:39.657001 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:55:39.657434 systemd-logind[1197]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:55:39.658342 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:52938.service.
Sep 13 00:55:39.659032 systemd-logind[1197]: Removed session 11.
Sep 13 00:55:39.692963 sshd[3383]: Accepted publickey for core from 10.0.0.1 port 52938 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:39.693781 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:39.696786 systemd-logind[1197]: New session 12 of user core.
Sep 13 00:55:39.697557 systemd[1]: Started session-12.scope.
Sep 13 00:55:39.854400 sshd[3383]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:39.856590 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:52938.service: Deactivated successfully.
Sep 13 00:55:39.857084 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:55:39.857591 systemd-logind[1197]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:55:39.858519 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:52954.service.
Sep 13 00:55:39.860133 systemd-logind[1197]: Removed session 12.
Sep 13 00:55:39.894424 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 52954 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:39.895497 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:39.898585 systemd-logind[1197]: New session 13 of user core.
Sep 13 00:55:39.899524 systemd[1]: Started session-13.scope.
Sep 13 00:55:39.999831 sshd[3395]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:40.002348 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:52954.service: Deactivated successfully.
Sep 13 00:55:40.003112 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:55:40.003741 systemd-logind[1197]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:55:40.004485 systemd-logind[1197]: Removed session 13.
Sep 13 00:55:45.004247 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:48996.service.
Sep 13 00:55:45.043615 sshd[3408]: Accepted publickey for core from 10.0.0.1 port 48996 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:45.045105 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:45.048786 systemd-logind[1197]: New session 14 of user core.
Sep 13 00:55:45.049801 systemd[1]: Started session-14.scope.
Sep 13 00:55:45.154015 sshd[3408]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:45.156294 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:48996.service: Deactivated successfully.
Sep 13 00:55:45.156993 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:55:45.157695 systemd-logind[1197]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:55:45.158313 systemd-logind[1197]: Removed session 14.
Sep 13 00:55:50.158054 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:34868.service.
Sep 13 00:55:50.193030 sshd[3422]: Accepted publickey for core from 10.0.0.1 port 34868 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:50.194023 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:50.197931 systemd-logind[1197]: New session 15 of user core.
Sep 13 00:55:50.198215 systemd[1]: Started session-15.scope.
Sep 13 00:55:50.305342 sshd[3422]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:50.308276 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:34868.service: Deactivated successfully.
Sep 13 00:55:50.308813 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:55:50.309302 systemd-logind[1197]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:55:50.310335 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:34870.service.
Sep 13 00:55:50.311070 systemd-logind[1197]: Removed session 15.
Sep 13 00:55:50.346684 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 34870 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:50.347793 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:50.352245 systemd[1]: Started session-16.scope.
Sep 13 00:55:50.352550 systemd-logind[1197]: New session 16 of user core.
Sep 13 00:55:50.531850 sshd[3437]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:50.534452 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:34870.service: Deactivated successfully.
Sep 13 00:55:50.534968 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:55:50.535482 systemd-logind[1197]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:55:50.536493 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:34872.service.
Sep 13 00:55:50.537182 systemd-logind[1197]: Removed session 16.
Sep 13 00:55:50.572496 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:50.573496 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:50.576390 systemd-logind[1197]: New session 17 of user core.
Sep 13 00:55:50.577091 systemd[1]: Started session-17.scope.
Sep 13 00:55:51.144623 sshd[3448]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:51.147416 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:34872.service: Deactivated successfully.
Sep 13 00:55:51.147911 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:55:51.149850 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:34876.service.
Sep 13 00:55:51.150371 systemd-logind[1197]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:55:51.151307 systemd-logind[1197]: Removed session 17.
Sep 13 00:55:51.189771 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 34876 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:51.190785 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:51.194180 systemd-logind[1197]: New session 18 of user core.
Sep 13 00:55:51.194980 systemd[1]: Started session-18.scope.
Sep 13 00:55:51.407333 sshd[3468]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:51.410484 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:34880.service.
Sep 13 00:55:51.410980 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:34876.service: Deactivated successfully.
Sep 13 00:55:51.411534 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:55:51.412561 systemd-logind[1197]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:55:51.413556 systemd-logind[1197]: Removed session 18.
Sep 13 00:55:51.445627 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 34880 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:51.446577 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:51.449417 systemd-logind[1197]: New session 19 of user core.
Sep 13 00:55:51.450118 systemd[1]: Started session-19.scope.
Sep 13 00:55:51.551102 sshd[3479]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:51.553478 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:34880.service: Deactivated successfully.
Sep 13 00:55:51.554193 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:55:51.554651 systemd-logind[1197]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:55:51.555366 systemd-logind[1197]: Removed session 19.
Sep 13 00:55:56.555331 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:34890.service.
Sep 13 00:55:56.589821 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:55:56.590683 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:55:56.593626 systemd-logind[1197]: New session 20 of user core.
Sep 13 00:55:56.594343 systemd[1]: Started session-20.scope.
Sep 13 00:55:56.696926 sshd[3494]: pam_unix(sshd:session): session closed for user core
Sep 13 00:55:56.698977 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:34890.service: Deactivated successfully.
Sep 13 00:55:56.699620 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:55:56.700335 systemd-logind[1197]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:55:56.700955 systemd-logind[1197]: Removed session 20.
Sep 13 00:56:01.701142 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:52518.service.
Sep 13 00:56:01.736613 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 52518 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:56:01.737785 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:56:01.740703 systemd-logind[1197]: New session 21 of user core.
Sep 13 00:56:01.741420 systemd[1]: Started session-21.scope.
Sep 13 00:56:01.944474 sshd[3513]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:01.946961 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:52518.service: Deactivated successfully.
Sep 13 00:56:01.947651 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:56:01.948209 systemd-logind[1197]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:56:01.948834 systemd-logind[1197]: Removed session 21.
Sep 13 00:56:06.948356 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:52532.service.
Sep 13 00:56:06.985819 sshd[3526]: Accepted publickey for core from 10.0.0.1 port 52532 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM
Sep 13 00:56:06.986782 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:56:06.989741 systemd-logind[1197]: New session 22 of user core.
Sep 13 00:56:06.990502 systemd[1]: Started session-22.scope.
Sep 13 00:56:07.092646 sshd[3526]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:07.095219 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:52532.service: Deactivated successfully.
Sep 13 00:56:07.095913 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:56:07.096374 systemd-logind[1197]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:56:07.096952 systemd-logind[1197]: Removed session 22.
Sep 13 00:56:08.212568 kubelet[1909]: E0913 00:56:08.212536 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:12.096386 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:35188.service.
Sep 13 00:56:12.131438 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 35188 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:56:12.132518 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:12.135411 systemd-logind[1197]: New session 23 of user core. Sep 13 00:56:12.136129 systemd[1]: Started session-23.scope. Sep 13 00:56:12.237008 sshd[3539]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:12.239898 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:35188.service: Deactivated successfully. Sep 13 00:56:12.240411 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:56:12.240921 systemd-logind[1197]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:56:12.242201 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:35194.service. Sep 13 00:56:12.243293 systemd-logind[1197]: Removed session 23. Sep 13 00:56:12.277484 sshd[3552]: Accepted publickey for core from 10.0.0.1 port 35194 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:56:12.278412 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:12.281514 systemd-logind[1197]: New session 24 of user core. Sep 13 00:56:12.282098 systemd[1]: Started session-24.scope. Sep 13 00:56:13.600427 env[1211]: time="2025-09-13T00:56:13.600386155Z" level=info msg="StopContainer for \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\" with timeout 30 (s)" Sep 13 00:56:13.601268 env[1211]: time="2025-09-13T00:56:13.601211045Z" level=info msg="Stop container \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\" with signal terminated" Sep 13 00:56:13.618709 systemd[1]: cri-containerd-bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8.scope: Deactivated successfully. Sep 13 00:56:13.622407 env[1211]: time="2025-09-13T00:56:13.621977937Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:56:13.626795 env[1211]: time="2025-09-13T00:56:13.626761301Z" level=info msg="StopContainer for \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\" with timeout 2 (s)" Sep 13 00:56:13.626966 env[1211]: time="2025-09-13T00:56:13.626937316Z" level=info msg="Stop container \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\" with signal terminated" Sep 13 00:56:13.632413 systemd-networkd[1038]: lxc_health: Link DOWN Sep 13 00:56:13.632422 systemd-networkd[1038]: lxc_health: Lost carrier Sep 13 00:56:13.635528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8-rootfs.mount: Deactivated successfully. 
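The "StopContainer ... with timeout 30" / "Stop container ... with signal terminated" pair that follows is graceful-stop semantics: send SIGTERM, wait out the grace period, then force-kill. A generic sketch of that pattern — an illustration of the semantics, not containerd's actual code path:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout asks the process to terminate, waits up to the grace
// period, then escalates to SIGKILL, mirroring the StopContainer timeout.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // grace period expired: force-kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop result:", stopWithTimeout(cmd, 2*time.Second))
}
```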
Sep 13 00:56:13.640576 env[1211]: time="2025-09-13T00:56:13.640524354Z" level=info msg="shim disconnected" id=bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8 Sep 13 00:56:13.640576 env[1211]: time="2025-09-13T00:56:13.640577756Z" level=warning msg="cleaning up after shim disconnected" id=bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8 namespace=k8s.io Sep 13 00:56:13.640576 env[1211]: time="2025-09-13T00:56:13.640588065Z" level=info msg="cleaning up dead shim" Sep 13 00:56:13.650281 env[1211]: time="2025-09-13T00:56:13.650247269Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3609 runtime=io.containerd.runc.v2\n" Sep 13 00:56:13.654754 env[1211]: time="2025-09-13T00:56:13.654711945Z" level=info msg="StopContainer for \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\" returns successfully" Sep 13 00:56:13.655457 env[1211]: time="2025-09-13T00:56:13.655417197Z" level=info msg="StopPodSandbox for \"7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8\"" Sep 13 00:56:13.655516 env[1211]: time="2025-09-13T00:56:13.655484595Z" level=info msg="Container to stop \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:56:13.657248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8-shm.mount: Deactivated successfully. Sep 13 00:56:13.662808 systemd[1]: cri-containerd-7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8.scope: Deactivated successfully. Sep 13 00:56:13.675859 systemd[1]: cri-containerd-7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3.scope: Deactivated successfully. Sep 13 00:56:13.676082 systemd[1]: cri-containerd-7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3.scope: Consumed 5.799s CPU time. Sep 13 00:56:13.684103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8-rootfs.mount: Deactivated successfully. Sep 13 00:56:13.690897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3-rootfs.mount: Deactivated successfully. 
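The repeated "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"" messages come from a state guard evaluated before each stop is issued. A minimal sketch of such a guard, using the CRI state names that appear in the log; the helper itself is hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

type ContainerState string

const (
	StateCreated ContainerState = "CONTAINER_CREATED"
	StateRunning ContainerState = "CONTAINER_RUNNING"
	StateExited  ContainerState = "CONTAINER_EXITED"
	StateUnknown ContainerState = "CONTAINER_UNKNOWN"
)

var errAlreadyStopped = errors.New("container already stopped")

// stoppable mirrors the guard: only running or unknown containers are sent
// a stop signal; an exited container is logged and skipped, not re-stopped.
func stoppable(state ContainerState) error {
	switch state {
	case StateRunning, StateUnknown:
		return nil
	case StateExited:
		return errAlreadyStopped
	default:
		return fmt.Errorf("container in state %q cannot be stopped", state)
	}
}

func main() {
	fmt.Println(stoppable(StateExited)) // container already stopped
}
```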
Sep 13 00:56:13.692431 env[1211]: time="2025-09-13T00:56:13.692251959Z" level=info msg="shim disconnected" id=7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8 Sep 13 00:56:13.692640 env[1211]: time="2025-09-13T00:56:13.692434737Z" level=warning msg="cleaning up after shim disconnected" id=7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8 namespace=k8s.io Sep 13 00:56:13.692640 env[1211]: time="2025-09-13T00:56:13.692444677Z" level=info msg="cleaning up dead shim" Sep 13 00:56:13.695631 env[1211]: time="2025-09-13T00:56:13.695602446Z" level=info msg="shim disconnected" id=7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3 Sep 13 00:56:13.695710 env[1211]: time="2025-09-13T00:56:13.695631562Z" level=warning msg="cleaning up after shim disconnected" id=7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3 namespace=k8s.io Sep 13 00:56:13.695710 env[1211]: time="2025-09-13T00:56:13.695641981Z" level=info msg="cleaning up dead shim" Sep 13 00:56:13.698909 env[1211]: time="2025-09-13T00:56:13.698880745Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3655 runtime=io.containerd.runc.v2\n" Sep 13 00:56:13.699253 env[1211]: time="2025-09-13T00:56:13.699228777Z" level=info msg="TearDown network for sandbox \"7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8\" successfully" Sep 13 00:56:13.699333 env[1211]: time="2025-09-13T00:56:13.699311605Z" level=info msg="StopPodSandbox for \"7c09c7dedb553c443414607989bc99c1a621dff2a68de644e2aa091b1bc399c8\" returns successfully" Sep 13 00:56:13.701988 env[1211]: time="2025-09-13T00:56:13.701950267Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3664 runtime=io.containerd.runc.v2\n" Sep 13 00:56:13.704454 env[1211]: time="2025-09-13T00:56:13.704420187Z" level=info msg="StopContainer for \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\" returns successfully" Sep 13 00:56:13.704886 env[1211]: time="2025-09-13T00:56:13.704853693Z" level=info msg="StopPodSandbox for \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\"" Sep 13 00:56:13.705025 env[1211]: time="2025-09-13T00:56:13.704906803Z" level=info msg="Container to stop \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:56:13.705025 env[1211]: time="2025-09-13T00:56:13.704920019Z" level=info msg="Container to stop \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:56:13.705025 env[1211]: time="2025-09-13T00:56:13.704930429Z" level=info msg="Container to stop \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:56:13.705025 env[1211]: time="2025-09-13T00:56:13.704940227Z" level=info msg="Container to stop \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:56:13.705025 env[1211]: time="2025-09-13T00:56:13.704949474Z" level=info msg="Container to stop \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:56:13.709936 systemd[1]: 
cri-containerd-5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2.scope: Deactivated successfully. Sep 13 00:56:13.734059 env[1211]: time="2025-09-13T00:56:13.734011785Z" level=info msg="shim disconnected" id=5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2 Sep 13 00:56:13.734059 env[1211]: time="2025-09-13T00:56:13.734054285Z" level=warning msg="cleaning up after shim disconnected" id=5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2 namespace=k8s.io Sep 13 00:56:13.734059 env[1211]: time="2025-09-13T00:56:13.734062632Z" level=info msg="cleaning up dead shim" Sep 13 00:56:13.770860 env[1211]: time="2025-09-13T00:56:13.770771985Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3699 runtime=io.containerd.runc.v2\n" Sep 13 00:56:13.771093 env[1211]: time="2025-09-13T00:56:13.771058530Z" level=info msg="TearDown network for sandbox \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" successfully" Sep 13 00:56:13.771093 env[1211]: time="2025-09-13T00:56:13.771085512Z" level=info msg="StopPodSandbox for \"5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2\" returns successfully" Sep 13 00:56:13.809111 kubelet[1909]: I0913 00:56:13.809065 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-config-path\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809111 kubelet[1909]: I0913 00:56:13.809097 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-bpf-maps\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809111 kubelet[1909]: I0913 00:56:13.809119 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-hostproc\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809531 kubelet[1909]: I0913 00:56:13.809134 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-kernel\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809531 kubelet[1909]: I0913 00:56:13.809149 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-xtables-lock\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809531 kubelet[1909]: I0913 00:56:13.809166 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-run\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809531 kubelet[1909]: I0913 00:56:13.809182 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9j9j\" (UniqueName: 
\"kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-kube-api-access-w9j9j\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809531 kubelet[1909]: I0913 00:56:13.809197 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-etc-cni-netd\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809531 kubelet[1909]: I0913 00:56:13.809210 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-lib-modules\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809695 kubelet[1909]: I0913 00:56:13.809243 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af42ab2b-6b79-49a4-849d-801bfc59adce-clustermesh-secrets\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809695 kubelet[1909]: I0913 00:56:13.809238 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-hostproc" (OuterVolumeSpecName: "hostproc") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.809695 kubelet[1909]: I0913 00:56:13.809261 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-cilium-config-path\") pod \"6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e\" (UID: \"6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e\") " Sep 13 00:56:13.809695 kubelet[1909]: I0913 00:56:13.809243 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.809695 kubelet[1909]: I0913 00:56:13.809279 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-hubble-tls\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809831 kubelet[1909]: I0913 00:56:13.809294 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-cgroup\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809831 kubelet[1909]: I0913 00:56:13.809305 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.809831 kubelet[1909]: I0913 00:56:13.809311 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-net\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809831 kubelet[1909]: I0913 00:56:13.809348 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.809831 kubelet[1909]: I0913 00:56:13.809362 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cni-path\") pod \"af42ab2b-6b79-49a4-849d-801bfc59adce\" (UID: \"af42ab2b-6b79-49a4-849d-801bfc59adce\") " Sep 13 00:56:13.809949 kubelet[1909]: I0913 00:56:13.809369 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.809949 kubelet[1909]: I0913 00:56:13.809392 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-kube-api-access-x5ck4\") pod \"6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e\" (UID: \"6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e\") " Sep 13 00:56:13.809949 kubelet[1909]: I0913 00:56:13.809456 1909 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.809949 kubelet[1909]: I0913 00:56:13.809465 1909 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.809949 kubelet[1909]: I0913 00:56:13.809473 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.809949 kubelet[1909]: I0913 00:56:13.809481 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.809949 kubelet[1909]: I0913 00:56:13.809489 1909 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.810113 kubelet[1909]: I0913 00:56:13.809826 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-bpf-maps" 
(OuterVolumeSpecName: "bpf-maps") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.810113 kubelet[1909]: I0913 00:56:13.809978 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.810113 kubelet[1909]: I0913 00:56:13.810006 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.810113 kubelet[1909]: I0913 00:56:13.810029 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.811129 kubelet[1909]: I0913 00:56:13.811098 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:56:13.812953 kubelet[1909]: I0913 00:56:13.812924 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e" (UID: "6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:56:13.813160 kubelet[1909]: I0913 00:56:13.813016 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cni-path" (OuterVolumeSpecName: "cni-path") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:56:13.813267 kubelet[1909]: I0913 00:56:13.813195 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-kube-api-access-w9j9j" (OuterVolumeSpecName: "kube-api-access-w9j9j") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "kube-api-access-w9j9j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:56:13.813594 kubelet[1909]: I0913 00:56:13.813544 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af42ab2b-6b79-49a4-849d-801bfc59adce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:56:13.814064 kubelet[1909]: I0913 00:56:13.814005 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-kube-api-access-x5ck4" (OuterVolumeSpecName: "kube-api-access-x5ck4") pod "6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e" (UID: "6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e"). InnerVolumeSpecName "kube-api-access-x5ck4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:56:13.814691 kubelet[1909]: I0913 00:56:13.814649 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "af42ab2b-6b79-49a4-849d-801bfc59adce" (UID: "af42ab2b-6b79-49a4-849d-801bfc59adce"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910574 1909 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-kube-api-access-x5ck4\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910596 1909 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910605 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910614 1909 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910621 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910629 1909 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w9j9j\" (UniqueName: \"kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-kube-api-access-w9j9j\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910638 1909 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af42ab2b-6b79-49a4-849d-801bfc59adce-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910645 kubelet[1909]: I0913 00:56:13.910645 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cilium-cgroup\") on node \"localhost\" DevicePath 
\"\"" Sep 13 00:56:13.910885 kubelet[1909]: I0913 00:56:13.910652 1909 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af42ab2b-6b79-49a4-849d-801bfc59adce-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910885 kubelet[1909]: I0913 00:56:13.910676 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:13.910885 kubelet[1909]: I0913 00:56:13.910685 1909 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af42ab2b-6b79-49a4-849d-801bfc59adce-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:56:14.218132 systemd[1]: Removed slice kubepods-besteffort-pod6ba12dc0_e9dc_47ea_910a_2801a9cbfe8e.slice. Sep 13 00:56:14.219063 systemd[1]: Removed slice kubepods-burstable-podaf42ab2b_6b79_49a4_849d_801bfc59adce.slice. Sep 13 00:56:14.219132 systemd[1]: kubepods-burstable-podaf42ab2b_6b79_49a4_849d_801bfc59adce.slice: Consumed 5.886s CPU time. Sep 13 00:56:14.410243 kubelet[1909]: I0913 00:56:14.410206 1909 scope.go:117] "RemoveContainer" containerID="bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8" Sep 13 00:56:14.412042 env[1211]: time="2025-09-13T00:56:14.412010439Z" level=info msg="RemoveContainer for \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\"" Sep 13 00:56:14.417302 env[1211]: time="2025-09-13T00:56:14.417245638Z" level=info msg="RemoveContainer for \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\" returns successfully" Sep 13 00:56:14.417561 kubelet[1909]: I0913 00:56:14.417457 1909 scope.go:117] "RemoveContainer" containerID="bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8" Sep 13 00:56:14.417939 env[1211]: time="2025-09-13T00:56:14.417871369Z" level=error msg="ContainerStatus for \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\": not found" Sep 13 00:56:14.418184 kubelet[1909]: E0913 00:56:14.418102 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\": not found" containerID="bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8" Sep 13 00:56:14.418987 kubelet[1909]: I0913 00:56:14.418145 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8"} err="failed to get container status \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcf60adc9b5ad71dafcfa4d79881e8edff7bf597ca8c9219074b30992030a6d8\": not found" Sep 13 00:56:14.418987 kubelet[1909]: I0913 00:56:14.418729 1909 scope.go:117] "RemoveContainer" containerID="7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3" Sep 13 00:56:14.424094 env[1211]: time="2025-09-13T00:56:14.424066746Z" level=info msg="RemoveContainer for \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\"" Sep 13 00:56:14.427002 env[1211]: time="2025-09-13T00:56:14.426962755Z" 
level=info msg="RemoveContainer for \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\" returns successfully" Sep 13 00:56:14.427251 kubelet[1909]: I0913 00:56:14.427200 1909 scope.go:117] "RemoveContainer" containerID="a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81" Sep 13 00:56:14.428302 env[1211]: time="2025-09-13T00:56:14.428281123Z" level=info msg="RemoveContainer for \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\"" Sep 13 00:56:14.431092 env[1211]: time="2025-09-13T00:56:14.431062594Z" level=info msg="RemoveContainer for \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\" returns successfully" Sep 13 00:56:14.431267 kubelet[1909]: I0913 00:56:14.431232 1909 scope.go:117] "RemoveContainer" containerID="a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84" Sep 13 00:56:14.432385 env[1211]: time="2025-09-13T00:56:14.432346598Z" level=info msg="RemoveContainer for \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\"" Sep 13 00:56:14.435141 env[1211]: time="2025-09-13T00:56:14.435110757Z" level=info msg="RemoveContainer for \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\" returns successfully" Sep 13 00:56:14.435270 kubelet[1909]: I0913 00:56:14.435250 1909 scope.go:117] "RemoveContainer" containerID="25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724" Sep 13 00:56:14.436140 env[1211]: time="2025-09-13T00:56:14.436109867Z" level=info msg="RemoveContainer for \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\"" Sep 13 00:56:14.438769 env[1211]: time="2025-09-13T00:56:14.438739520Z" level=info msg="RemoveContainer for \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\" returns successfully" Sep 13 00:56:14.438928 kubelet[1909]: I0913 00:56:14.438898 1909 scope.go:117] "RemoveContainer" containerID="943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff" Sep 13 00:56:14.439837 env[1211]: time="2025-09-13T00:56:14.439814916Z" level=info msg="RemoveContainer for \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\"" Sep 13 00:56:14.442343 env[1211]: time="2025-09-13T00:56:14.442313730Z" level=info msg="RemoveContainer for \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\" returns successfully" Sep 13 00:56:14.442485 kubelet[1909]: I0913 00:56:14.442449 1909 scope.go:117] "RemoveContainer" containerID="7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3" Sep 13 00:56:14.442728 env[1211]: time="2025-09-13T00:56:14.442675608Z" level=error msg="ContainerStatus for \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\": not found" Sep 13 00:56:14.442859 kubelet[1909]: E0913 00:56:14.442825 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\": not found" containerID="7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3" Sep 13 00:56:14.442924 kubelet[1909]: I0913 00:56:14.442853 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3"} err="failed to get container status 
\"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c03d187ee94e825022af69d5f3d7929c9b8a4009e43ec4373d9e0fe8866f1c3\": not found" Sep 13 00:56:14.442924 kubelet[1909]: I0913 00:56:14.442873 1909 scope.go:117] "RemoveContainer" containerID="a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81" Sep 13 00:56:14.443061 env[1211]: time="2025-09-13T00:56:14.443015936Z" level=error msg="ContainerStatus for \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\": not found" Sep 13 00:56:14.443158 kubelet[1909]: E0913 00:56:14.443139 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\": not found" containerID="a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81" Sep 13 00:56:14.443191 kubelet[1909]: I0913 00:56:14.443158 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81"} err="failed to get container status \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\": rpc error: code = NotFound desc = an error occurred when try to find container \"a67a4527b420a059190c14541745e7fb74b6c76a48214d9b56e85ce2bc16fb81\": not found" Sep 13 00:56:14.443191 kubelet[1909]: I0913 00:56:14.443170 1909 scope.go:117] "RemoveContainer" containerID="a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84" Sep 13 00:56:14.443380 env[1211]: time="2025-09-13T00:56:14.443310396Z" level=error msg="ContainerStatus for \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\": not found" Sep 13 00:56:14.443500 kubelet[1909]: E0913 00:56:14.443469 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\": not found" containerID="a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84" Sep 13 00:56:14.443584 kubelet[1909]: I0913 00:56:14.443497 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84"} err="failed to get container status \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\": rpc error: code = NotFound desc = an error occurred when try to find container \"a508569af6f7c86897f73965ea6934594a33ed9063e889600f13eb0fdb6bdc84\": not found" Sep 13 00:56:14.443584 kubelet[1909]: I0913 00:56:14.443518 1909 scope.go:117] "RemoveContainer" containerID="25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724" Sep 13 00:56:14.443728 env[1211]: time="2025-09-13T00:56:14.443673537Z" level=error msg="ContainerStatus for \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\": not found" Sep 13 
00:56:14.443825 kubelet[1909]: E0913 00:56:14.443790 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\": not found" containerID="25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724" Sep 13 00:56:14.443913 kubelet[1909]: I0913 00:56:14.443844 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724"} err="failed to get container status \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\": rpc error: code = NotFound desc = an error occurred when try to find container \"25dcea121271c297e11f7ca4d3abf1b9399d46dd737a1c19265546c61bca1724\": not found" Sep 13 00:56:14.443913 kubelet[1909]: I0913 00:56:14.443859 1909 scope.go:117] "RemoveContainer" containerID="943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff" Sep 13 00:56:14.444068 env[1211]: time="2025-09-13T00:56:14.444013424Z" level=error msg="ContainerStatus for \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\": not found" Sep 13 00:56:14.444211 kubelet[1909]: E0913 00:56:14.444176 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\": not found" containerID="943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff" Sep 13 00:56:14.444252 kubelet[1909]: I0913 00:56:14.444210 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff"} err="failed to get container status \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\": rpc error: code = NotFound desc = an error occurred when try to find container \"943608810534683f772b9764f3cf0ef13871cf86ea69fdb8794b24f1424f1bff\": not found" Sep 13 00:56:14.607824 systemd[1]: var-lib-kubelet-pods-6ba12dc0\x2de9dc\x2d47ea\x2d910a\x2d2801a9cbfe8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5ck4.mount: Deactivated successfully. Sep 13 00:56:14.607917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2-rootfs.mount: Deactivated successfully. Sep 13 00:56:14.607966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5395e522120f58f31290cb77857dbc9feaa9f39c9da56b2eafc586f950800ec2-shm.mount: Deactivated successfully. Sep 13 00:56:14.608016 systemd[1]: var-lib-kubelet-pods-af42ab2b\x2d6b79\x2d49a4\x2d849d\x2d801bfc59adce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw9j9j.mount: Deactivated successfully. Sep 13 00:56:14.608074 systemd[1]: var-lib-kubelet-pods-af42ab2b\x2d6b79\x2d49a4\x2d849d\x2d801bfc59adce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:56:14.608129 systemd[1]: var-lib-kubelet-pods-af42ab2b\x2d6b79\x2d49a4\x2d849d\x2d801bfc59adce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
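The RemoveContainer / ContainerStatus exchange above — each delete succeeds, the follow-up status probe returns NotFound, and the kubelet records "DeleteContainer returned error" without failing the cleanup — is the idempotent-delete pattern: NotFound after a delete means the work is already done. A sketch of the caller side, with a hypothetical in-memory store standing in for the runtime:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found") // stand-in for the gRPC NotFound code

type store map[string]bool

func (s store) remove(id string) { delete(s, id) }

func (s store) status(id string) (string, error) {
	if !s[id] {
		// Error text borrowed verbatim from the containerd lines above.
		return "", fmt.Errorf("an error occurred when try to find container %q: %w", id, errNotFound)
	}
	return "CONTAINER_EXITED", nil
}

func main() {
	s := store{"bcf60adc": true}
	s.remove("bcf60adc")
	if _, err := s.status("bcf60adc"); errors.Is(err, errNotFound) {
		// Mirror the kubelet: NotFound after delete means the work is done.
		fmt.Println("container already removed; treating delete as successful")
	}
}
```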
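The mount unit names being deactivated (e.g. `...clustermesh\x2dsecrets.mount`) use systemd's unit-name escaping for paths: `/` maps to `-`, and any byte outside ASCII alphanumerics, `:`, `_` and `.` is written as a `\xXX` escape, which is why every `-` and `~` in the original kubelet path shows up as `\x2d` and `\x7e`. A sketch of that encoding, enough to reproduce the names above (`systemd-escape` is the authoritative implementation):

```go
package main

import (
	"fmt"
	"strings"
)

func systemdEscapePath(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separator maps to '-'
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_',
			c == '.' && i > 0:
			b.WriteByte(c) // allowed verbatim (a leading '.' is escaped)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else, including '-' and '~'
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/4b69dde3-467a-432b-b9a6-1e2fb8e69285/volumes/kubernetes.io~secret/clustermesh-secrets"
	fmt.Println(systemdEscapePath(p) + ".mount")
	// var-lib-kubelet-pods-4b69dde3\x2d467a\x2d432b\x2db9a6\x2d1e2fb8e69285-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount
}
```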
Sep 13 00:56:15.263683 kubelet[1909]: E0913 00:56:15.263633 1909 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:56:15.569410 sshd[3552]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:15.572063 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:35194.service: Deactivated successfully. Sep 13 00:56:15.572564 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:56:15.573055 systemd-logind[1197]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:56:15.574048 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:35202.service. Sep 13 00:56:15.574823 systemd-logind[1197]: Removed session 24. Sep 13 00:56:15.611445 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 35202 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:56:15.612471 sshd[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:15.615609 systemd-logind[1197]: New session 25 of user core. Sep 13 00:56:15.616397 systemd[1]: Started session-25.scope. Sep 13 00:56:16.203445 sshd[3721]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:16.204632 systemd[1]: Started sshd@25-10.0.0.140:22-10.0.0.1:35206.service. Sep 13 00:56:16.208393 systemd-logind[1197]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:56:16.209854 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:35202.service: Deactivated successfully. Sep 13 00:56:16.210400 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:56:16.211641 systemd-logind[1197]: Removed session 25. Sep 13 00:56:16.213654 kubelet[1909]: E0913 00:56:16.213629 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:56:16.215784 kubelet[1909]: I0913 00:56:16.215157 1909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e" path="/var/lib/kubelet/pods/6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e/volumes" Sep 13 00:56:16.215784 kubelet[1909]: I0913 00:56:16.215516 1909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af42ab2b-6b79-49a4-849d-801bfc59adce" path="/var/lib/kubelet/pods/af42ab2b-6b79-49a4-849d-801bfc59adce/volumes" Sep 13 00:56:16.218405 kubelet[1909]: I0913 00:56:16.218373 1909 memory_manager.go:355] "RemoveStaleState removing state" podUID="6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e" containerName="cilium-operator" Sep 13 00:56:16.218405 kubelet[1909]: I0913 00:56:16.218390 1909 memory_manager.go:355] "RemoveStaleState removing state" podUID="af42ab2b-6b79-49a4-849d-801bfc59adce" containerName="cilium-agent" Sep 13 00:56:16.223161 systemd[1]: Created slice kubepods-burstable-pod4b69dde3_467a_432b_b9a6_1e2fb8e69285.slice. Sep 13 00:56:16.244617 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 35206 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:56:16.245026 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:16.249299 systemd[1]: Started session-26.scope. Sep 13 00:56:16.249556 systemd-logind[1197]: New session 26 of user core. 
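"Created slice kubepods-burstable-pod4b69dde3_467a_432b_b9a6_1e2fb8e69285.slice" shows how the systemd cgroup driver names pod slices: a kubepods prefix, the QoS class, and the pod UID with `-` mapped to `_`, since dashes act as separators inside slice names. A sketch of that mapping, covering only the two QoS classes visible in this log (besteffort and burstable; Guaranteed pods are named without the class segment):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName derives the systemd slice name for a burstable or besteffort
// pod, matching the "Created slice" / "Removed slice" entries above.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "4b69dde3-467a-432b-b9a6-1e2fb8e69285"))
	// kubepods-burstable-pod4b69dde3_467a_432b_b9a6_1e2fb8e69285.slice
	fmt.Println(podSliceName("besteffort", "6ba12dc0-e9dc-47ea-910a-2801a9cbfe8e"))
	// kubepods-besteffort-pod6ba12dc0_e9dc_47ea_910a_2801a9cbfe8e.slice
}
```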
Sep 13 00:56:16.323877 kubelet[1909]: I0913 00:56:16.323839 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-xtables-lock\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.323877 kubelet[1909]: I0913 00:56:16.323872 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-bpf-maps\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324182 kubelet[1909]: I0913 00:56:16.323890 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hubble-tls\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324182 kubelet[1909]: I0913 00:56:16.323916 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9v82\" (UniqueName: \"kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-kube-api-access-p9v82\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324182 kubelet[1909]: I0913 00:56:16.323932 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-net\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324182 kubelet[1909]: I0913 00:56:16.323946 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-kernel\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324182 kubelet[1909]: I0913 00:56:16.323958 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-clustermesh-secrets\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324307 kubelet[1909]: I0913 00:56:16.323972 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-run\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324307 kubelet[1909]: I0913 00:56:16.323984 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cni-path\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324307 kubelet[1909]: I0913 00:56:16.323996 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-etc-cni-netd\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324307 kubelet[1909]: I0913 00:56:16.324010 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-lib-modules\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324307 kubelet[1909]: I0913 00:56:16.324025 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hostproc\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324307 kubelet[1909]: I0913 00:56:16.324037 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-config-path\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324434 kubelet[1909]: I0913 00:56:16.324051 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-ipsec-secrets\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.324434 kubelet[1909]: I0913 00:56:16.324063 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-cgroup\") pod \"cilium-z67jc\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") " pod="kube-system/cilium-z67jc" Sep 13 00:56:16.365855 sshd[3732]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:16.369567 systemd[1]: Started sshd@26-10.0.0.140:22-10.0.0.1:35208.service. Sep 13 00:56:16.369959 systemd[1]: sshd@25-10.0.0.140:22-10.0.0.1:35206.service: Deactivated successfully. Sep 13 00:56:16.371387 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:56:16.372362 systemd-logind[1197]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:56:16.373145 systemd-logind[1197]: Removed session 26. Sep 13 00:56:16.378040 kubelet[1909]: E0913 00:56:16.377978 1909 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-p9v82 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-z67jc" podUID="4b69dde3-467a-432b-b9a6-1e2fb8e69285" Sep 13 00:56:16.406337 sshd[3746]: Accepted publickey for core from 10.0.0.1 port 35208 ssh2: RSA SHA256:rmcm+o+TNpmszbEi1IM4jaR3PBT1fuhzI0NJEP8YsaM Sep 13 00:56:16.407357 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:16.410640 systemd-logind[1197]: New session 27 of user core. Sep 13 00:56:16.411414 systemd[1]: Started session-27.scope. 
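The "Error syncing pod, skipping ... failed to process volumes ...: context canceled" entry above records a pod whose volume setup was aborted because the pod was deleted mid-sync; the UnmountVolume entries that follow are the resulting cleanup. A minimal sketch of the pattern: a wait loop runs under a context, the deletion path cancels it, and context.Canceled surfaces as the sync error:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForVolumes polls until all volumes report mounted, or until the
// context is canceled (e.g. the pod was deleted mid-setup).
func waitForVolumes(ctx context.Context, mounted func() bool) error {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // pod deleted: context canceled
		case <-ticker.C:
			if mounted() {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(300 * time.Millisecond); cancel() }() // pod deleted
	err := waitForVolumes(ctx, func() bool { return false })     // never mounts
	fmt.Println("sync pod:", err, "canceled:", errors.Is(err, context.Canceled))
}
```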
Sep 13 00:56:16.525448 kubelet[1909]: I0913 00:56:16.525333 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-run\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525448 kubelet[1909]: I0913 00:56:16.525373 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cni-path\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525448 kubelet[1909]: I0913 00:56:16.525392 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hostproc\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525448 kubelet[1909]: I0913 00:56:16.525415 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hubble-tls\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525448 kubelet[1909]: I0913 00:56:16.525429 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-lib-modules\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525448 kubelet[1909]: I0913 00:56:16.525449 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-cgroup\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525732 kubelet[1909]: I0913 00:56:16.525463 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-net\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525732 kubelet[1909]: I0913 00:56:16.525480 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-clustermesh-secrets\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525732 kubelet[1909]: I0913 00:56:16.525463 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.525732 kubelet[1909]: I0913 00:56:16.525497 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-kernel\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525732 kubelet[1909]: I0913 00:56:16.525520 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-xtables-lock\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525858 kubelet[1909]: I0913 00:56:16.525520 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.525858 kubelet[1909]: I0913 00:56:16.525538 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cni-path" (OuterVolumeSpecName: "cni-path") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.525858 kubelet[1909]: I0913 00:56:16.525535 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9v82\" (UniqueName: \"kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-kube-api-access-p9v82\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525858 kubelet[1909]: I0913 00:56:16.525578 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.525858 kubelet[1909]: I0913 00:56:16.525585 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-bpf-maps\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525987 kubelet[1909]: I0913 00:56:16.525613 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-ipsec-secrets\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525987 kubelet[1909]: I0913 00:56:16.525617 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.525987 kubelet[1909]: I0913 00:56:16.525632 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-etc-cni-netd\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525987 kubelet[1909]: I0913 00:56:16.525652 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-config-path\") pod \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\" (UID: \"4b69dde3-467a-432b-b9a6-1e2fb8e69285\") "
Sep 13 00:56:16.525987 kubelet[1909]: I0913 00:56:16.525710 1909 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.525987 kubelet[1909]: I0913 00:56:16.525718 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.526129 kubelet[1909]: I0913 00:56:16.525726 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.526129 kubelet[1909]: I0913 00:56:16.525735 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.526129 kubelet[1909]: I0913 00:56:16.525742 1909 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.526129 kubelet[1909]: I0913 00:56:16.525899 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.526129 kubelet[1909]: I0913 00:56:16.525923 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.526256 kubelet[1909]: I0913 00:56:16.526146 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.526256 kubelet[1909]: I0913 00:56:16.526165 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hostproc" (OuterVolumeSpecName: "hostproc") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.526634 kubelet[1909]: I0913 00:56:16.526618 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:56:16.528586 kubelet[1909]: I0913 00:56:16.528558 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:56:16.529422 systemd[1]: var-lib-kubelet-pods-4b69dde3\x2d467a\x2d432b\x2db9a6\x2d1e2fb8e69285-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:56:16.530577 kubelet[1909]: I0913 00:56:16.530558 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:56:16.531299 systemd[1]: var-lib-kubelet-pods-4b69dde3\x2d467a\x2d432b\x2db9a6\x2d1e2fb8e69285-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:56:16.531451 kubelet[1909]: I0913 00:56:16.531432 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:56:16.531836 kubelet[1909]: I0913 00:56:16.531819 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-kube-api-access-p9v82" (OuterVolumeSpecName: "kube-api-access-p9v82") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "kube-api-access-p9v82". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:56:16.531924 kubelet[1909]: I0913 00:56:16.531853 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4b69dde3-467a-432b-b9a6-1e2fb8e69285" (UID: "4b69dde3-467a-432b-b9a6-1e2fb8e69285"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626251 1909 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626278 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626285 1909 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626292 1909 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9v82\" (UniqueName: \"kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-kube-api-access-p9v82\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626300 1909 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626308 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626314 1909 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626304 kubelet[1909]: I0913 00:56:16.626320 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b69dde3-467a-432b-b9a6-1e2fb8e69285-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626657 kubelet[1909]: I0913 00:56:16.626327 1909 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:16.626657 kubelet[1909]: I0913 00:56:16.626335 1909 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b69dde3-467a-432b-b9a6-1e2fb8e69285-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 13 00:56:17.423900 systemd[1]: Removed slice kubepods-burstable-pod4b69dde3_467a_432b_b9a6_1e2fb8e69285.slice.
Sep 13 00:56:17.428783 systemd[1]: var-lib-kubelet-pods-4b69dde3\x2d467a\x2d432b\x2db9a6\x2d1e2fb8e69285-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9v82.mount: Deactivated successfully.
Sep 13 00:56:17.428879 systemd[1]: var-lib-kubelet-pods-4b69dde3\x2d467a\x2d432b\x2db9a6\x2d1e2fb8e69285-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:56:17.455844 systemd[1]: Created slice kubepods-burstable-pod9c5907d7_ce98_49dd_9936_245cde00acc3.slice.
Sep 13 00:56:17.531812 kubelet[1909]: I0913 00:56:17.531771 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-lib-modules\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.531812 kubelet[1909]: I0913 00:56:17.531802 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-xtables-lock\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.531812 kubelet[1909]: I0913 00:56:17.531820 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9c5907d7-ce98-49dd-9936-245cde00acc3-clustermesh-secrets\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532268 kubelet[1909]: I0913 00:56:17.531839 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-hostproc\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532268 kubelet[1909]: I0913 00:56:17.531856 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9c5907d7-ce98-49dd-9936-245cde00acc3-cilium-ipsec-secrets\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532268 kubelet[1909]: I0913 00:56:17.531900 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9c5907d7-ce98-49dd-9936-245cde00acc3-hubble-tls\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532268 kubelet[1909]: I0913 00:56:17.531943 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-cilium-cgroup\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532268 kubelet[1909]: I0913 00:56:17.531976 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-host-proc-sys-net\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532268 kubelet[1909]: I0913 00:56:17.532041 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhq7\" (UniqueName: \"kubernetes.io/projected/9c5907d7-ce98-49dd-9936-245cde00acc3-kube-api-access-7vhq7\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532405 kubelet[1909]: I0913 00:56:17.532097 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-etc-cni-netd\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532405 kubelet[1909]: I0913 00:56:17.532112 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-host-proc-sys-kernel\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532405 kubelet[1909]: I0913 00:56:17.532131 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-bpf-maps\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532405 kubelet[1909]: I0913 00:56:17.532147 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-cni-path\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532405 kubelet[1909]: I0913 00:56:17.532163 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c5907d7-ce98-49dd-9936-245cde00acc3-cilium-config-path\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.532405 kubelet[1909]: I0913 00:56:17.532179 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9c5907d7-ce98-49dd-9936-245cde00acc3-cilium-run\") pod \"cilium-kt9qw\" (UID: \"9c5907d7-ce98-49dd-9936-245cde00acc3\") " pod="kube-system/cilium-kt9qw"
Sep 13 00:56:17.759128 kubelet[1909]: E0913 00:56:17.758519 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:17.759319 env[1211]: time="2025-09-13T00:56:17.758981399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt9qw,Uid:9c5907d7-ce98-49dd-9936-245cde00acc3,Namespace:kube-system,Attempt:0,}"
Sep 13 00:56:17.771110 env[1211]: time="2025-09-13T00:56:17.771052962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:56:17.771110 env[1211]: time="2025-09-13T00:56:17.771096124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:56:17.771110 env[1211]: time="2025-09-13T00:56:17.771106593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:56:17.771304 env[1211]: time="2025-09-13T00:56:17.771246109Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f pid=3776 runtime=io.containerd.runc.v2
Sep 13 00:56:17.780233 systemd[1]: Started cri-containerd-753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f.scope.
Sep 13 00:56:17.798695 env[1211]: time="2025-09-13T00:56:17.798634953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kt9qw,Uid:9c5907d7-ce98-49dd-9936-245cde00acc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\""
Sep 13 00:56:17.799538 kubelet[1909]: E0913 00:56:17.799511 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:17.802756 env[1211]: time="2025-09-13T00:56:17.802733090Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:56:17.813766 env[1211]: time="2025-09-13T00:56:17.813731043Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"157ccde6a1d712e0fac351cf0e079772d932dd2b580076f261f71137436bbf2a\""
Sep 13 00:56:17.814602 env[1211]: time="2025-09-13T00:56:17.814562984Z" level=info msg="StartContainer for \"157ccde6a1d712e0fac351cf0e079772d932dd2b580076f261f71137436bbf2a\""
Sep 13 00:56:17.827833 systemd[1]: Started cri-containerd-157ccde6a1d712e0fac351cf0e079772d932dd2b580076f261f71137436bbf2a.scope.
Sep 13 00:56:17.850428 env[1211]: time="2025-09-13T00:56:17.850388021Z" level=info msg="StartContainer for \"157ccde6a1d712e0fac351cf0e079772d932dd2b580076f261f71137436bbf2a\" returns successfully"
Sep 13 00:56:17.855697 systemd[1]: cri-containerd-157ccde6a1d712e0fac351cf0e079772d932dd2b580076f261f71137436bbf2a.scope: Deactivated successfully.
Sep 13 00:56:17.882221 env[1211]: time="2025-09-13T00:56:17.882169573Z" level=info msg="shim disconnected" id=157ccde6a1d712e0fac351cf0e079772d932dd2b580076f261f71137436bbf2a
Sep 13 00:56:17.882221 env[1211]: time="2025-09-13T00:56:17.882219197Z" level=warning msg="cleaning up after shim disconnected" id=157ccde6a1d712e0fac351cf0e079772d932dd2b580076f261f71137436bbf2a namespace=k8s.io
Sep 13 00:56:17.882221 env[1211]: time="2025-09-13T00:56:17.882238113Z" level=info msg="cleaning up dead shim"
Sep 13 00:56:17.888714 env[1211]: time="2025-09-13T00:56:17.888682221Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\n"
Sep 13 00:56:18.214558 kubelet[1909]: I0913 00:56:18.214510 1909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b69dde3-467a-432b-b9a6-1e2fb8e69285" path="/var/lib/kubelet/pods/4b69dde3-467a-432b-b9a6-1e2fb8e69285/volumes"
Sep 13 00:56:18.424088 kubelet[1909]: E0913 00:56:18.424055 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:18.425966 env[1211]: time="2025-09-13T00:56:18.425924195Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:56:18.440806 env[1211]: time="2025-09-13T00:56:18.440768889Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60\""
Sep 13 00:56:18.441273 env[1211]: time="2025-09-13T00:56:18.441220948Z" level=info msg="StartContainer for \"044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60\""
Sep 13 00:56:18.455579 systemd[1]: Started cri-containerd-044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60.scope.
Sep 13 00:56:18.473880 env[1211]: time="2025-09-13T00:56:18.473811728Z" level=info msg="StartContainer for \"044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60\" returns successfully"
Sep 13 00:56:18.478628 systemd[1]: cri-containerd-044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60.scope: Deactivated successfully.
Sep 13 00:56:18.497187 env[1211]: time="2025-09-13T00:56:18.497140061Z" level=info msg="shim disconnected" id=044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60
Sep 13 00:56:18.497187 env[1211]: time="2025-09-13T00:56:18.497183974Z" level=warning msg="cleaning up after shim disconnected" id=044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60 namespace=k8s.io
Sep 13 00:56:18.497390 env[1211]: time="2025-09-13T00:56:18.497193131Z" level=info msg="cleaning up dead shim"
Sep 13 00:56:18.504013 env[1211]: time="2025-09-13T00:56:18.503957924Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3924 runtime=io.containerd.runc.v2\n"
Sep 13 00:56:19.426656 kubelet[1909]: E0913 00:56:19.426623 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:19.428970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-044e10b5e67675ce4503724b20ca8105af89dfbcda30f4e1ceec1ababa82cf60-rootfs.mount: Deactivated successfully.
Sep 13 00:56:19.431326 env[1211]: time="2025-09-13T00:56:19.431289816Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:56:19.444140 env[1211]: time="2025-09-13T00:56:19.444077365Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7\""
Sep 13 00:56:19.444636 env[1211]: time="2025-09-13T00:56:19.444607311Z" level=info msg="StartContainer for \"06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7\""
Sep 13 00:56:19.460473 systemd[1]: Started cri-containerd-06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7.scope.
Sep 13 00:56:19.482123 env[1211]: time="2025-09-13T00:56:19.481675577Z" level=info msg="StartContainer for \"06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7\" returns successfully"
Sep 13 00:56:19.483103 systemd[1]: cri-containerd-06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7.scope: Deactivated successfully.
Sep 13 00:56:19.502831 env[1211]: time="2025-09-13T00:56:19.502787997Z" level=info msg="shim disconnected" id=06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7
Sep 13 00:56:19.502961 env[1211]: time="2025-09-13T00:56:19.502831179Z" level=warning msg="cleaning up after shim disconnected" id=06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7 namespace=k8s.io
Sep 13 00:56:19.502961 env[1211]: time="2025-09-13T00:56:19.502839946Z" level=info msg="cleaning up dead shim"
Sep 13 00:56:19.508294 env[1211]: time="2025-09-13T00:56:19.508262776Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3981 runtime=io.containerd.runc.v2\n"
Sep 13 00:56:20.265068 kubelet[1909]: E0913 00:56:20.265007 1909 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:56:20.429100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06f328b898bb630e2198b510ced883e937d4f72957873e50740c1328867d3da7-rootfs.mount: Deactivated successfully.
Sep 13 00:56:20.429882 kubelet[1909]: E0913 00:56:20.429524 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:20.431856 env[1211]: time="2025-09-13T00:56:20.431033815Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:56:20.442014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131087530.mount: Deactivated successfully.
Sep 13 00:56:20.447433 env[1211]: time="2025-09-13T00:56:20.447388960Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5\""
Sep 13 00:56:20.447945 env[1211]: time="2025-09-13T00:56:20.447897366Z" level=info msg="StartContainer for \"6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5\""
Sep 13 00:56:20.460697 systemd[1]: Started cri-containerd-6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5.scope.
Sep 13 00:56:20.478566 systemd[1]: cri-containerd-6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5.scope: Deactivated successfully.
Sep 13 00:56:20.480522 env[1211]: time="2025-09-13T00:56:20.480486205Z" level=info msg="StartContainer for \"6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5\" returns successfully"
Sep 13 00:56:20.499625 env[1211]: time="2025-09-13T00:56:20.499562376Z" level=info msg="shim disconnected" id=6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5
Sep 13 00:56:20.499625 env[1211]: time="2025-09-13T00:56:20.499611620Z" level=warning msg="cleaning up after shim disconnected" id=6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5 namespace=k8s.io
Sep 13 00:56:20.499625 env[1211]: time="2025-09-13T00:56:20.499620817Z" level=info msg="cleaning up dead shim"
Sep 13 00:56:20.505384 env[1211]: time="2025-09-13T00:56:20.505356067Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4035 runtime=io.containerd.runc.v2\n"
Sep 13 00:56:21.429174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a08ec3fbae51654ca2d0678279bcb8bcdb623758fe0ce81699143d06d8305f5-rootfs.mount: Deactivated successfully.
Sep 13 00:56:21.434105 kubelet[1909]: E0913 00:56:21.434080 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:21.435831 env[1211]: time="2025-09-13T00:56:21.435788252Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:56:21.448730 env[1211]: time="2025-09-13T00:56:21.448657201Z" level=info msg="CreateContainer within sandbox \"753371eaf1bd7a04f0763417c3c339fb0798ec859c6005797cbba5bbbf4c096f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6610111a461d0ce1c63321abf9ce630b64e64aa494f6aa7311460620017164e0\""
Sep 13 00:56:21.449182 env[1211]: time="2025-09-13T00:56:21.449158713Z" level=info msg="StartContainer for \"6610111a461d0ce1c63321abf9ce630b64e64aa494f6aa7311460620017164e0\""
Sep 13 00:56:21.465020 systemd[1]: Started cri-containerd-6610111a461d0ce1c63321abf9ce630b64e64aa494f6aa7311460620017164e0.scope.
Sep 13 00:56:21.488604 env[1211]: time="2025-09-13T00:56:21.488568181Z" level=info msg="StartContainer for \"6610111a461d0ce1c63321abf9ce630b64e64aa494f6aa7311460620017164e0\" returns successfully"
Sep 13 00:56:21.703514 kubelet[1909]: I0913 00:56:21.703370 1909 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:56:21Z","lastTransitionTime":"2025-09-13T00:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:56:21.728690 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:56:22.429329 systemd[1]: run-containerd-runc-k8s.io-6610111a461d0ce1c63321abf9ce630b64e64aa494f6aa7311460620017164e0-runc.KAKjzE.mount: Deactivated successfully.
Sep 13 00:56:22.439160 kubelet[1909]: E0913 00:56:22.439137 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:22.452717 kubelet[1909]: I0913 00:56:22.452643 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kt9qw" podStartSLOduration=5.452630605 podStartE2EDuration="5.452630605s" podCreationTimestamp="2025-09-13 00:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:56:22.452155244 +0000 UTC m=+92.328289055" watchObservedRunningTime="2025-09-13 00:56:22.452630605 +0000 UTC m=+92.328764426"
Sep 13 00:56:23.212113 kubelet[1909]: E0913 00:56:23.212079 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:23.759641 kubelet[1909]: E0913 00:56:23.759609 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:24.214107 kubelet[1909]: E0913 00:56:24.214056 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:24.228363 systemd-networkd[1038]: lxc_health: Link UP
Sep 13 00:56:24.242226 systemd-networkd[1038]: lxc_health: Gained carrier
Sep 13 00:56:24.244708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:56:25.411237 systemd-networkd[1038]: lxc_health: Gained IPv6LL
Sep 13 00:56:25.760387 kubelet[1909]: E0913 00:56:25.760306 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:26.445943 kubelet[1909]: E0913 00:56:26.445914 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:27.450845 kubelet[1909]: E0913 00:56:27.448650 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:56:28.843035 systemd[1]: run-containerd-runc-k8s.io-6610111a461d0ce1c63321abf9ce630b64e64aa494f6aa7311460620017164e0-runc.mnZ9aD.mount: Deactivated successfully.
Sep 13 00:56:30.926588 systemd[1]: run-containerd-runc-k8s.io-6610111a461d0ce1c63321abf9ce630b64e64aa494f6aa7311460620017164e0-runc.ms51x3.mount: Deactivated successfully.
Sep 13 00:56:30.964414 sshd[3746]: pam_unix(sshd:session): session closed for user core
Sep 13 00:56:30.966776 systemd[1]: sshd@26-10.0.0.140:22-10.0.0.1:35208.service: Deactivated successfully.
Sep 13 00:56:30.967452 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:56:30.967938 systemd-logind[1197]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:56:30.968517 systemd-logind[1197]: Removed session 27.