Sep 6 00:20:26.094161 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 00:20:26.094190 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:26.094198 kernel: BIOS-provided physical RAM map: Sep 6 00:20:26.094204 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 6 00:20:26.094210 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 6 00:20:26.094215 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 6 00:20:26.094222 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 6 00:20:26.094227 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 6 00:20:26.094235 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 6 00:20:26.094240 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 6 00:20:26.094246 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 6 00:20:26.094252 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 6 00:20:26.094257 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 6 00:20:26.094263 kernel: NX (Execute Disable) protection: active Sep 6 00:20:26.094273 kernel: SMBIOS 2.8 present. Sep 6 00:20:26.094281 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 6 00:20:26.094289 kernel: Hypervisor detected: KVM Sep 6 00:20:26.094297 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 00:20:26.094308 kernel: kvm-clock: cpu 0, msr 8819f001, primary cpu clock Sep 6 00:20:26.094315 kernel: kvm-clock: using sched offset of 3798821476 cycles Sep 6 00:20:26.094324 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 00:20:26.094332 kernel: tsc: Detected 2794.748 MHz processor Sep 6 00:20:26.094341 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 00:20:26.094352 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 00:20:26.094361 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 6 00:20:26.094369 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 00:20:26.094377 kernel: Using GB pages for direct mapping Sep 6 00:20:26.094384 kernel: ACPI: Early table checksum verification disabled Sep 6 00:20:26.094390 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 6 00:20:26.094396 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:26.094402 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:26.094408 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:26.094416 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 6 00:20:26.094422 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:26.094428 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:26.094434 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Sep 6 00:20:26.094440 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:20:26.094447 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 6 00:20:26.094453 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 6 00:20:26.094459 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 6 00:20:26.094469 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 6 00:20:26.094476 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 6 00:20:26.094482 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 6 00:20:26.094489 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 6 00:20:26.094495 kernel: No NUMA configuration found Sep 6 00:20:26.094502 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 6 00:20:26.094510 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 6 00:20:26.094516 kernel: Zone ranges: Sep 6 00:20:26.094523 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 00:20:26.094529 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 6 00:20:26.094536 kernel: Normal empty Sep 6 00:20:26.094542 kernel: Movable zone start for each node Sep 6 00:20:26.094548 kernel: Early memory node ranges Sep 6 00:20:26.094555 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 6 00:20:26.094561 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 6 00:20:26.094570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 6 00:20:26.094579 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:20:26.094586 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 6 00:20:26.094592 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 6 00:20:26.094598 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 6 00:20:26.094605 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 00:20:26.094612 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 6 00:20:26.094618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 6 00:20:26.094624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 00:20:26.094631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 00:20:26.094641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 00:20:26.094648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 00:20:26.094654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 00:20:26.094661 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 6 00:20:26.094667 kernel: TSC deadline timer available Sep 6 00:20:26.094675 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 6 00:20:26.094684 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 6 00:20:26.094692 kernel: kvm-guest: setup PV sched yield Sep 6 00:20:26.094700 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 6 00:20:26.094711 kernel: Booting paravirtualized kernel on KVM Sep 6 00:20:26.094720 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 00:20:26.094729 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Sep 6 00:20:26.094737 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Sep 6 00:20:26.094746 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 
alloc=1*2097152 Sep 6 00:20:26.094754 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 6 00:20:26.094763 kernel: kvm-guest: setup async PF for cpu 0 Sep 6 00:20:26.094771 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Sep 6 00:20:26.094780 kernel: kvm-guest: PV spinlocks enabled Sep 6 00:20:26.094790 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 6 00:20:26.094797 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Sep 6 00:20:26.094804 kernel: Policy zone: DMA32 Sep 6 00:20:26.094811 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:26.094818 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:20:26.094825 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:20:26.094832 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:20:26.094838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:20:26.094847 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved) Sep 6 00:20:26.094853 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 6 00:20:26.094860 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 00:20:26.094866 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 00:20:26.094873 kernel: rcu: Hierarchical RCU implementation. Sep 6 00:20:26.094880 kernel: rcu: RCU event tracing is enabled. Sep 6 00:20:26.094887 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 6 00:20:26.094894 kernel: Rude variant of Tasks RCU enabled. Sep 6 00:20:26.094900 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:20:26.094921 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 6 00:20:26.094927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 6 00:20:26.094934 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 6 00:20:26.094940 kernel: random: crng init done Sep 6 00:20:26.094947 kernel: Console: colour VGA+ 80x25 Sep 6 00:20:26.094953 kernel: printk: console [ttyS0] enabled Sep 6 00:20:26.094960 kernel: ACPI: Core revision 20210730 Sep 6 00:20:26.094967 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 6 00:20:26.094973 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 00:20:26.094981 kernel: x2apic enabled Sep 6 00:20:26.094988 kernel: Switched APIC routing to physical x2apic. Sep 6 00:20:26.094998 kernel: kvm-guest: setup PV IPIs Sep 6 00:20:26.095004 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 6 00:20:26.095011 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 6 00:20:26.095027 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 6 00:20:26.095034 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 6 00:20:26.095041 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 6 00:20:26.095048 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 6 00:20:26.095063 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 00:20:26.095071 kernel: Spectre V2 : Mitigation: Retpolines Sep 6 00:20:26.095080 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 00:20:26.095091 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 6 00:20:26.095100 kernel: active return thunk: retbleed_return_thunk Sep 6 00:20:26.095108 kernel: RETBleed: Mitigation: untrained return thunk Sep 6 00:20:26.095118 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 6 00:20:26.095127 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 6 00:20:26.095137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 00:20:26.095148 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 00:20:26.095157 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 00:20:26.095166 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 00:20:26.095175 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 6 00:20:26.095184 kernel: Freeing SMP alternatives memory: 32K Sep 6 00:20:26.095193 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:20:26.095200 kernel: LSM: Security Framework initializing Sep 6 00:20:26.095209 kernel: SELinux: Initializing. Sep 6 00:20:26.095216 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:20:26.095223 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:20:26.095230 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 6 00:20:26.095237 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 6 00:20:26.095244 kernel: ... version: 0 Sep 6 00:20:26.095250 kernel: ... bit width: 48 Sep 6 00:20:26.095257 kernel: ... generic registers: 6 Sep 6 00:20:26.095264 kernel: ... value mask: 0000ffffffffffff Sep 6 00:20:26.095273 kernel: ... max period: 00007fffffffffff Sep 6 00:20:26.095280 kernel: ... fixed-purpose events: 0 Sep 6 00:20:26.095286 kernel: ... event mask: 000000000000003f Sep 6 00:20:26.095293 kernel: signal: max sigframe size: 1776 Sep 6 00:20:26.095300 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:20:26.095307 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:20:26.095314 kernel: x86: Booting SMP configuration: Sep 6 00:20:26.095321 kernel: .... 
node #0, CPUs: #1 Sep 6 00:20:26.095327 kernel: kvm-clock: cpu 1, msr 8819f041, secondary cpu clock Sep 6 00:20:26.095334 kernel: kvm-guest: setup async PF for cpu 1 Sep 6 00:20:26.095342 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Sep 6 00:20:26.095349 kernel: #2 Sep 6 00:20:26.095356 kernel: kvm-clock: cpu 2, msr 8819f081, secondary cpu clock Sep 6 00:20:26.095363 kernel: kvm-guest: setup async PF for cpu 2 Sep 6 00:20:26.095370 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Sep 6 00:20:26.095380 kernel: #3 Sep 6 00:20:26.095387 kernel: kvm-clock: cpu 3, msr 8819f0c1, secondary cpu clock Sep 6 00:20:26.095394 kernel: kvm-guest: setup async PF for cpu 3 Sep 6 00:20:26.095401 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Sep 6 00:20:26.095409 kernel: smp: Brought up 1 node, 4 CPUs Sep 6 00:20:26.095416 kernel: smpboot: Max logical packages: 1 Sep 6 00:20:26.095423 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 6 00:20:26.095430 kernel: devtmpfs: initialized Sep 6 00:20:26.095437 kernel: x86/mm: Memory block size: 128MB Sep 6 00:20:26.095444 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:20:26.095451 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 6 00:20:26.095458 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:20:26.095465 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:20:26.095475 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:20:26.095484 kernel: audit: type=2000 audit(1757118025.407:1): state=initialized audit_enabled=0 res=1 Sep 6 00:20:26.095493 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:20:26.095502 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 00:20:26.095511 kernel: cpuidle: using governor menu Sep 6 00:20:26.095520 kernel: ACPI: bus type PCI registered Sep 6 00:20:26.095529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:20:26.095538 kernel: dca service started, version 1.12.1 Sep 6 00:20:26.095551 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 6 00:20:26.095563 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Sep 6 00:20:26.095572 kernel: PCI: Using configuration type 1 for base access Sep 6 00:20:26.095581 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
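As a quick cross-check of the calibration figures above: the per-CPU BogoMIPS value follows from the logged lpj=2794748 via the conventional formula lpj * HZ / 500000, and the smpboot total is four times that. The sketch below assumes CONFIG_HZ=1000, which this log does not state.

    # Cross-check of the BogoMIPS figures in the log (lpj=2794748 per CPU,
    # 22357.98 BogoMIPS total for 4 CPUs). HZ=1000 is an assumption.
    lpj = 2794748                 # loops_per_jiffy from the calibration line
    hz = 1000                     # assumed kernel timer frequency
    bogomips = lpj * hz / 500000  # conventional BogoMIPS formula
    print(round(bogomips, 2))     # 5589.5 (the kernel prints 5589.49, truncating)
    print(round(4 * bogomips, 2)) # 22357.98, matching the smpboot total above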
Sep 6 00:20:26.095590 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:20:26.095598 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:20:26.095605 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:20:26.095612 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:20:26.095619 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:20:26.095626 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:20:26.095635 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:20:26.095642 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:20:26.095649 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:20:26.095656 kernel: ACPI: Interpreter enabled Sep 6 00:20:26.095662 kernel: ACPI: PM: (supports S0 S3 S5) Sep 6 00:20:26.095669 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 00:20:26.095676 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 00:20:26.095683 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 6 00:20:26.095690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:20:26.095869 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:20:26.096125 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 6 00:20:26.096203 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 6 00:20:26.096213 kernel: PCI host bridge to bus 0000:00 Sep 6 00:20:26.096375 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 00:20:26.096455 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 00:20:26.098040 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 00:20:26.098138 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 6 00:20:26.098207 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 6 00:20:26.098272 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 6 00:20:26.098342 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:20:26.098477 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 6 00:20:26.098581 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 6 00:20:26.098691 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 6 00:20:26.098791 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 6 00:20:26.098868 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 6 00:20:26.098980 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 00:20:26.099129 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 6 00:20:26.099214 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 6 00:20:26.099295 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 6 00:20:26.099386 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 6 00:20:26.099502 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 6 00:20:26.099580 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 6 00:20:26.099667 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 6 00:20:26.099744 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Sep 6 00:20:26.099838 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 6 00:20:26.099976 kernel: pci 0000:00:04.0: reg 0x10: [io 
0xc0e0-0xc0ff] Sep 6 00:20:26.100075 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 6 00:20:26.100157 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 6 00:20:26.100245 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 6 00:20:26.100351 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 6 00:20:26.100438 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 6 00:20:26.100538 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 6 00:20:26.100618 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 6 00:20:26.100703 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 6 00:20:26.100799 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 6 00:20:26.100878 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 6 00:20:26.100888 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 00:20:26.100895 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 00:20:26.100902 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 00:20:26.100938 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 00:20:26.100948 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 6 00:20:26.100955 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 6 00:20:26.100962 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 6 00:20:26.100969 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 6 00:20:26.100976 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 6 00:20:26.100983 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 6 00:20:26.100990 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 6 00:20:26.100997 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 6 00:20:26.101004 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 6 00:20:26.101012 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 6 00:20:26.101028 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 6 00:20:26.101036 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 6 00:20:26.101048 kernel: iommu: Default domain type: Translated Sep 6 00:20:26.101057 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:20:26.101145 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 6 00:20:26.101221 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 00:20:26.101295 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 6 00:20:26.101310 kernel: vgaarb: loaded Sep 6 00:20:26.101320 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:20:26.101329 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:20:26.101336 kernel: PTP clock support registered Sep 6 00:20:26.101343 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:20:26.101350 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:20:26.101357 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 6 00:20:26.101363 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 6 00:20:26.101370 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 6 00:20:26.101379 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 6 00:20:26.101386 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 00:20:26.101393 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:20:26.101400 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:20:26.101407 kernel: pnp: PnP ACPI init Sep 6 00:20:26.101508 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 6 00:20:26.101519 kernel: pnp: PnP ACPI: found 6 devices Sep 6 00:20:26.101527 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:20:26.101536 kernel: NET: Registered PF_INET protocol family Sep 6 00:20:26.101543 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 00:20:26.101550 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 00:20:26.101557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:20:26.101564 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:20:26.101571 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 00:20:26.101578 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 00:20:26.101585 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:20:26.101592 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:20:26.101601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:20:26.101607 kernel: NET: Registered PF_XDP protocol family Sep 6 00:20:26.101682 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:20:26.101749 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:20:26.101814 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:20:26.101884 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 6 00:20:26.104039 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 6 00:20:26.104138 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 6 00:20:26.104154 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:20:26.104164 kernel: Initialise system trusted keyrings Sep 6 00:20:26.104173 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 00:20:26.104182 kernel: Key type asymmetric registered Sep 6 00:20:26.104192 kernel: Asymmetric key parser 'x509' registered Sep 6 00:20:26.104202 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:20:26.104211 kernel: io scheduler mq-deadline registered Sep 6 00:20:26.104220 kernel: io scheduler kyber registered Sep 6 00:20:26.104228 kernel: io scheduler bfq registered Sep 6 00:20:26.104238 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 00:20:26.104272 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 6 00:20:26.104284 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 6 00:20:26.104293 kernel: ACPI: \_SB_.GSIE: Enabled 
at IRQ 20 Sep 6 00:20:26.104303 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:20:26.104312 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 00:20:26.104321 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 00:20:26.104330 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 00:20:26.104340 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 00:20:26.104505 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 6 00:20:26.104538 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 00:20:26.104629 kernel: rtc_cmos 00:04: registered as rtc0 Sep 6 00:20:26.104721 kernel: rtc_cmos 00:04: setting system clock to 2025-09-06T00:20:25 UTC (1757118025) Sep 6 00:20:26.104810 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 6 00:20:26.104821 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:20:26.104828 kernel: Segment Routing with IPv6 Sep 6 00:20:26.104835 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:20:26.104842 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:20:26.104853 kernel: Key type dns_resolver registered Sep 6 00:20:26.104860 kernel: IPI shorthand broadcast: enabled Sep 6 00:20:26.104867 kernel: sched_clock: Marking stable (417576978, 102528875)->(574651697, -54545844) Sep 6 00:20:26.104874 kernel: registered taskstats version 1 Sep 6 00:20:26.104896 kernel: Loading compiled-in X.509 certificates Sep 6 00:20:26.104905 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 00:20:26.104924 kernel: Key type .fscrypt registered Sep 6 00:20:26.104931 kernel: Key type fscrypt-provisioning registered Sep 6 00:20:26.104938 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:20:26.104948 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:20:26.104955 kernel: ima: No architecture policies found Sep 6 00:20:26.104962 kernel: clk: Disabling unused clocks Sep 6 00:20:26.104969 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 00:20:26.104976 kernel: Write protecting the kernel read-only data: 28672k Sep 6 00:20:26.104983 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 00:20:26.104990 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 00:20:26.104997 kernel: Run /init as init process Sep 6 00:20:26.105006 kernel: with arguments: Sep 6 00:20:26.105013 kernel: /init Sep 6 00:20:26.105028 kernel: with environment: Sep 6 00:20:26.105035 kernel: HOME=/ Sep 6 00:20:26.105053 kernel: TERM=linux Sep 6 00:20:26.105065 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:20:26.105075 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:20:26.105085 systemd[1]: Detected virtualization kvm. Sep 6 00:20:26.105095 systemd[1]: Detected architecture x86-64. Sep 6 00:20:26.105102 systemd[1]: Running in initrd. Sep 6 00:20:26.105110 systemd[1]: No hostname configured, using default hostname. Sep 6 00:20:26.105118 systemd[1]: Hostname set to . Sep 6 00:20:26.105126 systemd[1]: Initializing machine ID from VM UUID. 
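The rtc_cmos line above pairs the wall-clock time 2025-09-06T00:20:25 UTC with the epoch value 1757118025, the same epoch base that appears in the audit records (e.g. audit(1757118025.407:1)). A one-line check of that pairing:

    from datetime import datetime, timezone

    # Epoch value taken from the rtc_cmos line in the log.
    print(datetime.fromtimestamp(1757118025, tz=timezone.utc).isoformat())
    # -> 2025-09-06T00:20:25+00:00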
Sep 6 00:20:26.105133 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:20:26.105141 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:20:26.105148 systemd[1]: Reached target cryptsetup.target. Sep 6 00:20:26.105155 systemd[1]: Reached target paths.target. Sep 6 00:20:26.105164 systemd[1]: Reached target slices.target. Sep 6 00:20:26.105198 systemd[1]: Reached target swap.target. Sep 6 00:20:26.105207 systemd[1]: Reached target timers.target. Sep 6 00:20:26.105215 systemd[1]: Listening on iscsid.socket. Sep 6 00:20:26.105223 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:20:26.105232 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:20:26.105240 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:20:26.105248 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:20:26.105255 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:20:26.105263 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:20:26.105271 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:20:26.105279 systemd[1]: Reached target sockets.target. Sep 6 00:20:26.105286 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:20:26.105309 systemd[1]: Finished network-cleanup.service. Sep 6 00:20:26.105321 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:20:26.105329 systemd[1]: Starting systemd-journald.service... Sep 6 00:20:26.105336 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:20:26.105344 systemd[1]: Starting systemd-resolved.service... Sep 6 00:20:26.105352 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:20:26.105360 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:20:26.105368 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:20:26.105375 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:20:26.105388 systemd-journald[198]: Journal started Sep 6 00:20:26.105456 systemd-journald[198]: Runtime Journal (/run/log/journal/21c160b1fa2e41d9a044a398ead963c6) is 6.0M, max 48.5M, 42.5M free. Sep 6 00:20:26.094047 systemd-modules-load[199]: Inserted module 'overlay' Sep 6 00:20:26.133883 systemd[1]: Started systemd-journald.service. Sep 6 00:20:26.133951 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:20:26.133964 kernel: audit: type=1130 audit(1757118026.127:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.133975 kernel: Bridge firewalling registered Sep 6 00:20:26.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.115798 systemd-resolved[200]: Positive Trust Anchors: Sep 6 00:20:26.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.115810 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:20:26.140138 kernel: audit: type=1130 audit(1757118026.133:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:26.140159 kernel: audit: type=1130 audit(1757118026.137:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.115840 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:20:26.118723 systemd-resolved[200]: Defaulting to hostname 'linux'. Sep 6 00:20:26.128323 systemd[1]: Started systemd-resolved.service. Sep 6 00:20:26.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.133771 systemd-modules-load[199]: Inserted module 'br_netfilter' Sep 6 00:20:26.134844 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:20:26.138265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:20:26.143139 systemd[1]: Reached target nss-lookup.target. Sep 6 00:20:26.143999 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:20:26.158984 kernel: audit: type=1130 audit(1757118026.142:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.160940 kernel: SCSI subsystem initialized Sep 6 00:20:26.162817 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:20:26.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.165432 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:20:26.169222 kernel: audit: type=1130 audit(1757118026.163:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.173095 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:20:26.173119 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:20:26.174307 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:20:26.176365 dracut-cmdline[217]: dracut-dracut-053 Sep 6 00:20:26.177896 systemd-modules-load[199]: Inserted module 'dm_multipath' Sep 6 00:20:26.178781 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:20:26.183395 kernel: audit: type=1130 audit(1757118026.177:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:26.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.179699 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:20:26.185697 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:20:26.190259 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:20:26.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.194940 kernel: audit: type=1130 audit(1757118026.190:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.252947 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:20:26.268940 kernel: iscsi: registered transport (tcp) Sep 6 00:20:26.289940 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:20:26.289971 kernel: QLogic iSCSI HBA Driver Sep 6 00:20:26.321328 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:20:26.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.322235 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:20:26.327165 kernel: audit: type=1130 audit(1757118026.320:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.368942 kernel: raid6: avx2x4 gen() 27351 MB/s Sep 6 00:20:26.385940 kernel: raid6: avx2x4 xor() 6618 MB/s Sep 6 00:20:26.402945 kernel: raid6: avx2x2 gen() 25253 MB/s Sep 6 00:20:26.419942 kernel: raid6: avx2x2 xor() 18763 MB/s Sep 6 00:20:26.436950 kernel: raid6: avx2x1 gen() 24090 MB/s Sep 6 00:20:26.453965 kernel: raid6: avx2x1 xor() 13844 MB/s Sep 6 00:20:26.470946 kernel: raid6: sse2x4 gen() 13293 MB/s Sep 6 00:20:26.487947 kernel: raid6: sse2x4 xor() 6557 MB/s Sep 6 00:20:26.504949 kernel: raid6: sse2x2 gen() 15249 MB/s Sep 6 00:20:26.521953 kernel: raid6: sse2x2 xor() 8102 MB/s Sep 6 00:20:26.538940 kernel: raid6: sse2x1 gen() 12303 MB/s Sep 6 00:20:26.556446 kernel: raid6: sse2x1 xor() 7631 MB/s Sep 6 00:20:26.556466 kernel: raid6: using algorithm avx2x4 gen() 27351 MB/s Sep 6 00:20:26.556475 kernel: raid6: .... xor() 6618 MB/s, rmw enabled Sep 6 00:20:26.557233 kernel: raid6: using avx2x2 recovery algorithm Sep 6 00:20:26.571964 kernel: xor: automatically using best checksumming function avx Sep 6 00:20:26.667952 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:20:26.678215 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:20:26.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:26.679000 audit: BPF prog-id=7 op=LOAD Sep 6 00:20:26.682000 audit: BPF prog-id=8 op=LOAD Sep 6 00:20:26.683506 systemd[1]: Starting systemd-udevd.service... Sep 6 00:20:26.685051 kernel: audit: type=1130 audit(1757118026.677:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.700154 systemd-udevd[401]: Using default interface naming scheme 'v252'. Sep 6 00:20:26.704373 systemd[1]: Started systemd-udevd.service. Sep 6 00:20:26.705411 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:20:26.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.716839 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 6 00:20:26.744504 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:20:26.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.746182 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:20:26.784359 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:20:26.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:26.817247 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 6 00:20:26.823182 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:20:26.823201 kernel: GPT:9289727 != 19775487 Sep 6 00:20:26.823210 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:20:26.823219 kernel: GPT:9289727 != 19775487 Sep 6 00:20:26.823227 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:20:26.823236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:26.825953 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:20:26.837489 kernel: AVX2 version of gcm_enc/dec engaged. Sep 6 00:20:26.837515 kernel: AES CTR mode by8 optimization enabled Sep 6 00:20:26.837533 kernel: libata version 3.00 loaded. 
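The virtio_blk line above reports vda as 19775488 512-byte logical blocks, and the GPT warnings compare 9289727 with 19775487. A quick arithmetic check of those figures; reading 9289727 as the image's backup-header location and 19775487 as the disk's last LBA follows from the wording of the GPT messages rather than being stated outright:

    # Numbers taken from the virtio_blk and GPT lines in the log.
    blocks = 19775488              # 512-byte logical blocks on vda
    size = blocks * 512
    print(round(size / 10**9, 1))  # 10.1  -> the "10.1 GB" (decimal) figure
    print(round(size / 2**30, 2))  # 9.43  -> the "9.43 GiB" (binary) figure
    print(blocks - 1)              # 19775487, the last LBA the warning expects
                                   # (9289727 is where the backup header actually sits)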
Sep 6 00:20:26.851032 kernel: ahci 0000:00:1f.2: version 3.0 Sep 6 00:20:26.870435 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 6 00:20:26.870453 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 6 00:20:26.870547 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 6 00:20:26.870626 kernel: scsi host0: ahci Sep 6 00:20:26.870731 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (452) Sep 6 00:20:26.870742 kernel: scsi host1: ahci Sep 6 00:20:26.870831 kernel: scsi host2: ahci Sep 6 00:20:26.870940 kernel: scsi host3: ahci Sep 6 00:20:26.871053 kernel: scsi host4: ahci Sep 6 00:20:26.871145 kernel: scsi host5: ahci Sep 6 00:20:26.871234 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 6 00:20:26.871245 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 6 00:20:26.871254 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 6 00:20:26.871262 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 6 00:20:26.871271 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 6 00:20:26.871280 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 6 00:20:26.865543 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:20:26.895366 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:20:26.897296 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:20:26.902433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:20:26.906363 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:20:26.908013 systemd[1]: Starting disk-uuid.service... Sep 6 00:20:26.917098 disk-uuid[527]: Primary Header is updated. Sep 6 00:20:26.917098 disk-uuid[527]: Secondary Entries is updated. Sep 6 00:20:26.917098 disk-uuid[527]: Secondary Header is updated. Sep 6 00:20:26.921189 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:26.978954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:27.181618 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:27.181705 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 6 00:20:27.181716 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:27.183517 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:27.183598 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:27.184957 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 00:20:27.185955 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 6 00:20:27.187392 kernel: ata3.00: applying bridge limits Sep 6 00:20:27.187427 kernel: ata3.00: configured for UDMA/100 Sep 6 00:20:27.187938 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 6 00:20:27.225964 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 6 00:20:27.242866 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 6 00:20:27.242885 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 6 00:20:27.949931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:20:27.950210 disk-uuid[528]: The operation has completed successfully. Sep 6 00:20:28.005471 systemd[1]: disk-uuid.service: Deactivated successfully. 
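The six SATA port addresses above (0xfebd4100 through 0xfebd4380) follow the standard AHCI register layout, in which per-port register blocks start at ABAR + 0x100 and are 0x80 bytes apart; that layout is background knowledge about AHCI, not something stated in this log.

    # Reproduce the per-port addresses from the ata1..ata6 lines, assuming the
    # standard AHCI layout (ABAR + 0x100 + 0x80 * port).
    abar = 0xfebd4000              # "abar m4096@0xfebd4000" from the log
    for n in range(6):
        print(f"ata{n + 1}: port 0x{abar + 0x100 + 0x80 * n:x}")
    # -> 0xfebd4100, 0xfebd4180, 0xfebd4200, 0xfebd4280, 0xfebd4300, 0xfebd4380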
Sep 6 00:20:28.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.005567 systemd[1]: Finished disk-uuid.service. Sep 6 00:20:28.006467 systemd[1]: Starting verity-setup.service... Sep 6 00:20:28.019939 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 6 00:20:28.039424 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:20:28.041526 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:20:28.043556 systemd[1]: Finished verity-setup.service. Sep 6 00:20:28.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.102938 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:20:28.103380 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:20:28.104218 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:20:28.104850 systemd[1]: Starting ignition-setup.service... Sep 6 00:20:28.107312 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:20:28.116489 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:20:28.116522 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:20:28.116536 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:20:28.124436 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:20:28.173406 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:20:28.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.174000 audit: BPF prog-id=9 op=LOAD Sep 6 00:20:28.175489 systemd[1]: Starting systemd-networkd.service... Sep 6 00:20:28.195756 systemd-networkd[713]: lo: Link UP Sep 6 00:20:28.195765 systemd-networkd[713]: lo: Gained carrier Sep 6 00:20:28.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.196213 systemd-networkd[713]: Enumeration completed Sep 6 00:20:28.196303 systemd[1]: Started systemd-networkd.service. Sep 6 00:20:28.196501 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:20:28.220515 systemd-networkd[713]: eth0: Link UP Sep 6 00:20:28.220519 systemd-networkd[713]: eth0: Gained carrier Sep 6 00:20:28.221109 systemd[1]: Reached target network.target. Sep 6 00:20:28.223388 systemd[1]: Starting iscsiuio.service... Sep 6 00:20:28.229865 systemd[1]: Finished ignition-setup.service. Sep 6 00:20:28.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.231829 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:20:28.310346 systemd[1]: Started iscsiuio.service. 
Sep 6 00:20:28.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.313338 systemd[1]: Starting iscsid.service... Sep 6 00:20:28.315174 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:20:28.316676 iscsid[720]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:20:28.316676 iscsid[720]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:20:28.316676 iscsid[720]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:20:28.316676 iscsid[720]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:20:28.316676 iscsid[720]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:20:28.316676 iscsid[720]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:20:28.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.317721 systemd[1]: Started iscsid.service. Sep 6 00:20:28.326320 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:20:28.339509 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:20:28.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.340737 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:20:28.342491 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:20:28.343552 systemd[1]: Reached target remote-fs.target. Sep 6 00:20:28.345336 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:20:28.352541 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:20:28.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 6 00:20:28.371681 ignition[718]: Ignition 2.14.0 Sep 6 00:20:28.371693 ignition[718]: Stage: fetch-offline Sep 6 00:20:28.371766 ignition[718]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:28.371778 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:28.371901 ignition[718]: parsed url from cmdline: "" Sep 6 00:20:28.371904 ignition[718]: no config URL provided Sep 6 00:20:28.371921 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:20:28.371930 ignition[718]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:20:28.371950 ignition[718]: op(1): [started] loading QEMU firmware config module Sep 6 00:20:28.371965 ignition[718]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 6 00:20:28.376541 ignition[718]: op(1): [finished] loading QEMU firmware config module Sep 6 00:20:28.417500 ignition[718]: parsing config with SHA512: 486a07f26b522168f1cd3f59933ab87e4c95c3a2b2c3862975f6552eb322664c7bbd1066716da008a0b5ce6d74824ad837204a8b3af0e966aadad73c083cc5d5 Sep 6 00:20:28.427806 unknown[718]: fetched base config from "system" Sep 6 00:20:28.427818 unknown[718]: fetched user config from "qemu" Sep 6 00:20:28.428462 ignition[718]: fetch-offline: fetch-offline passed Sep 6 00:20:28.428519 ignition[718]: Ignition finished successfully Sep 6 00:20:28.431850 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:20:28.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.432052 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 00:20:28.433896 systemd[1]: Starting ignition-kargs.service... Sep 6 00:20:28.448879 ignition[741]: Ignition 2.14.0 Sep 6 00:20:28.448889 ignition[741]: Stage: kargs Sep 6 00:20:28.449016 ignition[741]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:28.449025 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:28.452752 ignition[741]: kargs: kargs passed Sep 6 00:20:28.452794 ignition[741]: Ignition finished successfully Sep 6 00:20:28.455252 systemd[1]: Finished ignition-kargs.service. Sep 6 00:20:28.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.457122 systemd[1]: Starting ignition-disks.service... Sep 6 00:20:28.549975 ignition[747]: Ignition 2.14.0 Sep 6 00:20:28.549987 ignition[747]: Stage: disks Sep 6 00:20:28.550155 ignition[747]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:28.550165 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:28.552459 ignition[747]: disks: disks passed Sep 6 00:20:28.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.553753 systemd[1]: Finished ignition-disks.service. Sep 6 00:20:28.552513 ignition[747]: Ignition finished successfully Sep 6 00:20:28.554659 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:20:28.556039 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:20:28.556802 systemd[1]: Reached target local-fs.target. Sep 6 00:20:28.557550 systemd[1]: Reached target sysinit.target. 
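Ignition above logs the SHA512 of the config it parsed after fetching the user config over the QEMU firmware interface. A minimal sketch of recomputing that kind of digest; /run/ignition.json is the rendered config referenced by the skipped ignition-fetch condition check, and treating it as the exact bytes behind the logged digest is an assumption made only for illustration:

    import hashlib

    # Recompute a SHA512 over the rendered Ignition config (illustrative only;
    # the digest in the log may have been taken over a different byte stream).
    with open("/run/ignition.json", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())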
Sep 6 00:20:28.558298 systemd[1]: Reached target basic.target. Sep 6 00:20:28.560628 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:20:28.572337 systemd-fsck[755]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:20:28.733710 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:20:28.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.735482 systemd[1]: Mounting sysroot.mount... Sep 6 00:20:28.744936 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:20:28.745152 systemd[1]: Mounted sysroot.mount. Sep 6 00:20:28.746467 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:20:28.748182 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:20:28.748612 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:20:28.748684 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:20:28.748717 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:20:28.756797 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:20:28.757613 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:20:28.762767 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:20:28.766241 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:20:28.770438 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:20:28.774531 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:20:28.802803 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:20:28.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.803655 systemd[1]: Starting ignition-mount.service... Sep 6 00:20:28.805787 systemd[1]: Starting sysroot-boot.service... Sep 6 00:20:28.809895 bash[806]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 00:20:28.820347 ignition[807]: INFO : Ignition 2.14.0 Sep 6 00:20:28.820347 ignition[807]: INFO : Stage: mount Sep 6 00:20:28.822106 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:28.822106 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:28.822106 ignition[807]: INFO : mount: mount passed Sep 6 00:20:28.822106 ignition[807]: INFO : Ignition finished successfully Sep 6 00:20:28.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:28.823722 systemd[1]: Finished ignition-mount.service. Sep 6 00:20:28.826548 systemd[1]: Finished sysroot-boot.service. Sep 6 00:20:29.050520 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Sep 6 00:20:29.163543 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Sep 6 00:20:29.163640 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:20:29.163655 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:20:29.164292 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:20:29.169062 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:20:29.170993 systemd[1]: Starting ignition-files.service... Sep 6 00:20:29.191794 ignition[837]: INFO : Ignition 2.14.0 Sep 6 00:20:29.191794 ignition[837]: INFO : Stage: files Sep 6 00:20:29.193580 ignition[837]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:29.193580 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:29.193580 ignition[837]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:20:29.197299 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:20:29.197299 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:20:29.197299 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:20:29.197299 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:20:29.203193 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:20:29.203193 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 6 00:20:29.203193 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 6 00:20:29.198170 unknown[837]: wrote ssh authorized keys file for user: core Sep 6 00:20:29.290760 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 6 00:20:29.382061 systemd-networkd[713]: eth0: Gained IPv6LL Sep 6 00:20:29.419300 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 6 00:20:29.421419 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:20:29.421419 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 00:20:29.507978 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 00:20:29.626155 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 00:20:29.626155 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:20:29.629652 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:20:29.629652 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:20:29.632960 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 00:20:29.634580 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] 
writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:20:29.636233 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 00:20:29.637860 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:20:29.639751 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 00:20:29.639751 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:20:29.643130 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:20:29.643130 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 6 00:20:29.643130 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 6 00:20:29.643130 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 6 00:20:29.651948 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 6 00:20:29.941626 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 6 00:20:30.534286 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 6 00:20:30.534286 ignition[837]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Sep 6 00:20:30.538037 ignition[837]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 
00:20:30.572396 ignition[837]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:20:30.573985 ignition[837]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 6 00:20:30.575392 ignition[837]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:20:30.577092 ignition[837]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:20:30.578730 ignition[837]: INFO : files: files passed Sep 6 00:20:30.578730 ignition[837]: INFO : Ignition finished successfully Sep 6 00:20:30.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.579869 systemd[1]: Finished ignition-files.service. Sep 6 00:20:30.586581 kernel: kauditd_printk_skb: 23 callbacks suppressed Sep 6 00:20:30.586610 kernel: audit: type=1130 audit(1757118030.581:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.582167 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:20:30.586579 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:20:30.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.591147 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 6 00:20:30.587304 systemd[1]: Starting ignition-quench.service... Sep 6 00:20:30.602368 kernel: audit: type=1130 audit(1757118030.591:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.602397 kernel: audit: type=1130 audit(1757118030.595:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.602408 kernel: audit: type=1131 audit(1757118030.595:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.602540 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:20:30.588662 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:20:30.591277 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:20:30.591354 systemd[1]: Finished ignition-quench.service. 
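Everything the files stage wrote above (the helm and cilium archives, the yaml files, update.conf, the kubernetes sysext link, plus the prepare-helm.service and coreos-metadata.service units and their presets) comes from the provisioning config Ignition fetched earlier. A trimmed sketch of what such a config can look like in Ignition spec 3.x JSON; the URLs match the log, but the file list and unit body are abbreviated and hypothetical:

    cat > config.ign <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "mode": 420,
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz\n\n[Install]\nWantedBy=multi-user.target\n"
          },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }
    EOF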
Sep 6 00:20:30.596118 systemd[1]: Reached target ignition-complete.target. Sep 6 00:20:30.604602 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:20:30.620286 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:20:30.620377 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:20:30.628116 kernel: audit: type=1130 audit(1757118030.622:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.628136 kernel: audit: type=1131 audit(1757118030.622:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.622149 systemd[1]: Reached target initrd-fs.target. Sep 6 00:20:30.628882 systemd[1]: Reached target initrd.target. Sep 6 00:20:30.629126 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:20:30.629812 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:20:30.638614 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:20:30.642731 kernel: audit: type=1130 audit(1757118030.637:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.642788 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:20:30.651564 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:20:30.652443 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:20:30.654020 systemd[1]: Stopped target timers.target. Sep 6 00:20:30.654825 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:20:30.660351 kernel: audit: type=1131 audit(1757118030.655:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.654941 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:20:30.656954 systemd[1]: Stopped target initrd.target. Sep 6 00:20:30.661163 systemd[1]: Stopped target basic.target. Sep 6 00:20:30.661916 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:20:30.664023 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:20:30.665478 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:20:30.666305 systemd[1]: Stopped target remote-fs.target. Sep 6 00:20:30.668535 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:20:30.670147 systemd[1]: Stopped target sysinit.target. 
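The cascade of "Stopped target ..." lines that begins here is the normal initrd teardown: initrd-cleanup.service is little more than a request to isolate the switch-root target, which takes down everything not needed by it. This can be confirmed on a running system; the quoted ExecStart is paraphrased from the stock systemd unit, not copied from this host:

    systemctl cat initrd-cleanup.service
    # [Service]
    # Type=oneshot
    # ExecStart=systemctl --no-block isolate initrd-switch-root.target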
Sep 6 00:20:30.670860 systemd[1]: Stopped target local-fs.target. Sep 6 00:20:30.672980 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:20:30.673683 systemd[1]: Stopped target swap.target. Sep 6 00:20:30.680448 kernel: audit: type=1131 audit(1757118030.675:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.675116 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:20:30.675222 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:20:30.686249 kernel: audit: type=1131 audit(1757118030.681:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.676636 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:20:30.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.681306 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:20:30.681393 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:20:30.682289 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:20:30.682416 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:20:30.687203 systemd[1]: Stopped target paths.target. Sep 6 00:20:30.688575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:20:30.691966 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:20:30.694298 systemd[1]: Stopped target slices.target. Sep 6 00:20:30.695132 systemd[1]: Stopped target sockets.target. Sep 6 00:20:30.697685 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:20:30.697777 systemd[1]: Closed iscsid.socket. Sep 6 00:20:30.699760 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:20:30.699853 systemd[1]: Closed iscsiuio.socket. Sep 6 00:20:30.701966 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:20:30.702076 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:20:30.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.703251 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:20:30.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.703378 systemd[1]: Stopped ignition-files.service. Sep 6 00:20:30.706582 systemd[1]: Stopping ignition-mount.service... Sep 6 00:20:30.708423 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:20:30.708577 systemd[1]: Stopped kmod-static-nodes.service. 
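The audit SERVICE_START/SERVICE_STOP records interleaved with these messages are emitted by systemd for every unit state change and land in the journal; where the audit userspace tools are installed they can also be queried directly. Both commands below are illustrative:

    journalctl -b _TRANSPORT=audit | grep -E 'SERVICE_(START|STOP)'
    ausearch -m SERVICE_START,SERVICE_STOP -ts today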
Sep 6 00:20:30.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.711079 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:20:30.711771 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:20:30.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.714331 ignition[878]: INFO : Ignition 2.14.0 Sep 6 00:20:30.714331 ignition[878]: INFO : Stage: umount Sep 6 00:20:30.714331 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:20:30.714331 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:20:30.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.711935 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:20:30.721030 ignition[878]: INFO : umount: umount passed Sep 6 00:20:30.721030 ignition[878]: INFO : Ignition finished successfully Sep 6 00:20:30.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.713431 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:20:30.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.713543 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:20:30.716882 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:20:30.717003 systemd[1]: Stopped ignition-mount.service. Sep 6 00:20:30.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.718089 systemd[1]: Stopped target network.target. Sep 6 00:20:30.720198 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:20:30.720239 systemd[1]: Stopped ignition-disks.service. Sep 6 00:20:30.721705 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:20:30.721740 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:20:30.722510 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:20:30.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:20:30.722544 systemd[1]: Stopped ignition-setup.service. Sep 6 00:20:30.722900 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:20:30.726230 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:20:30.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.727830 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:20:30.727937 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:20:30.740000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:20:30.731953 systemd-networkd[713]: eth0: DHCPv6 lease lost Sep 6 00:20:30.741000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:20:30.733751 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:20:30.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.733838 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:20:30.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.737155 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:20:30.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.737237 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:20:30.740065 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:20:30.740094 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:20:30.742175 systemd[1]: Stopping network-cleanup.service... Sep 6 00:20:30.742943 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:20:30.742983 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:20:30.744526 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:20:30.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.744561 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:20:30.746859 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:20:30.746901 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:20:30.747903 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:20:30.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.752950 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:20:30.755936 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:20:30.756023 systemd[1]: Stopped network-cleanup.service. Sep 6 00:20:30.758738 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:20:30.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:30.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.759542 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:20:30.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.759642 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:20:30.762029 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:20:30.762062 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:20:30.764231 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:20:30.764269 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:20:30.765989 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:20:30.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.766023 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:20:30.767481 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:20:30.767512 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:20:30.769098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:20:30.769128 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:20:30.769762 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:20:30.769932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:20:30.769980 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:20:30.776557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:20:30.776625 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:20:30.858329 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:20:30.858413 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:20:30.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.860131 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:20:30.861540 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:20:30.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:30.861597 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:20:30.863833 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:20:30.880378 systemd[1]: Switching root. Sep 6 00:20:30.898402 iscsid[720]: iscsid shutting down. Sep 6 00:20:30.899095 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). 
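The "Switching root." line is initrd-switch-root.service handing control to the root filesystem assembled under /sysroot, which is why journald receives SIGTERM here and is started again a moment later inside the new root. The unit itself is tiny; the quoted ExecStart is paraphrased from the stock systemd unit:

    systemctl cat initrd-switch-root.service
    # [Service]
    # Type=oneshot
    # ExecStart=systemctl --no-block switch-root /sysroot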
Sep 6 00:20:30.899133 systemd-journald[198]: Journal stopped Sep 6 00:20:33.618253 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:20:33.618309 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:20:33.618325 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:20:33.618335 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:20:33.618345 kernel: SELinux: policy capability open_perms=1 Sep 6 00:20:33.618357 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:20:33.618371 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:20:33.618381 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:20:33.618390 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:20:33.618401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:20:33.618414 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:20:33.618425 systemd[1]: Successfully loaded SELinux policy in 37.383ms. Sep 6 00:20:33.618441 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.947ms. Sep 6 00:20:33.618452 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:20:33.618463 systemd[1]: Detected virtualization kvm. Sep 6 00:20:33.618473 systemd[1]: Detected architecture x86-64. Sep 6 00:20:33.618484 systemd[1]: Detected first boot. Sep 6 00:20:33.618495 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:20:33.618506 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:20:33.618517 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:20:33.618527 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:33.618542 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:33.618555 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:33.618567 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:20:33.618578 systemd[1]: Stopped iscsiuio.service. Sep 6 00:20:33.618590 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:20:33.618602 systemd[1]: Stopped iscsid.service. Sep 6 00:20:33.618612 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:20:33.618625 systemd[1]: Stopped initrd-switch-root.service. Sep 6 00:20:33.618636 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:20:33.618647 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:20:33.618658 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:20:33.618672 systemd[1]: Created slice system-getty.slice. Sep 6 00:20:33.618683 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:20:33.618694 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:20:33.618705 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
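After the switch to the real root, PID 1 loads the SELinux policy, re-detects its environment and, because this is a first boot, seeds the machine ID from the VM UUID and applies unit presets to /etc. A few quick checks that correspond to those messages on the booted system (sestatus is only present where the SELinux userspace tools are installed):

    systemd-detect-virt            # prints "kvm" on this platform
    cat /etc/machine-id            # seeded from the VM UUID on first boot
    systemctl --version | head -1  # systemd 252 and its feature flags
    sestatus                       # SELinux mode and loaded policy, if available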
Sep 6 00:20:33.618715 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:20:33.618725 systemd[1]: Created slice user.slice. Sep 6 00:20:33.618736 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:20:33.618746 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:20:33.618758 systemd[1]: Set up automount boot.automount. Sep 6 00:20:33.618769 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:20:33.618779 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:20:33.618790 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:20:33.618801 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:20:33.618823 systemd[1]: Reached target integritysetup.target. Sep 6 00:20:33.618837 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:20:33.618852 systemd[1]: Reached target remote-fs.target. Sep 6 00:20:33.618866 systemd[1]: Reached target slices.target. Sep 6 00:20:33.618878 systemd[1]: Reached target swap.target. Sep 6 00:20:33.618892 systemd[1]: Reached target torcx.target. Sep 6 00:20:33.618904 systemd[1]: Reached target veritysetup.target. Sep 6 00:20:33.618928 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:20:33.618939 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:20:33.618949 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:20:33.618960 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:20:33.618970 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:20:33.618983 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:20:33.618994 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:20:33.619004 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:20:33.619014 systemd[1]: Mounting media.mount... Sep 6 00:20:33.619024 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:33.619035 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:20:33.619045 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:20:33.619057 systemd[1]: Mounting tmp.mount... Sep 6 00:20:33.619067 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:20:33.619079 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:33.619090 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:20:33.619101 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:20:33.619111 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:33.619121 systemd[1]: Starting modprobe@drm.service... Sep 6 00:20:33.619131 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:33.619141 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:20:33.619152 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:33.619162 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:20:33.619175 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:20:33.619185 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:20:33.619195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:20:33.619206 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:20:33.619216 systemd[1]: Stopped systemd-journald.service. Sep 6 00:20:33.619226 kernel: fuse: init (API version 7.34) Sep 6 00:20:33.619235 kernel: loop: module loaded Sep 6 00:20:33.619246 systemd[1]: Starting systemd-journald.service... Sep 6 00:20:33.619257 systemd[1]: Starting systemd-modules-load.service... 
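The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop instances above are all the same stock template unit, which simply loads the module named by the instance (the fuse and loop "module loaded" kernel lines confirm it). The equivalent by hand:

    systemctl cat modprobe@.service        # roughly: ExecStart=-/sbin/modprobe -abq %i
    systemctl start modprobe@fuse.service
    lsmod | grep -E '^(fuse|loop|dm_mod)'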
Sep 6 00:20:33.619269 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:20:33.619280 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:20:33.619291 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:20:33.619301 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:20:33.619312 systemd[1]: Stopped verity-setup.service. Sep 6 00:20:33.619323 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:33.619333 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:20:33.619346 systemd-journald[989]: Journal started Sep 6 00:20:33.619387 systemd-journald[989]: Runtime Journal (/run/log/journal/21c160b1fa2e41d9a044a398ead963c6) is 6.0M, max 48.5M, 42.5M free. Sep 6 00:20:30.954000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:20:31.264000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:20:31.264000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:20:31.264000 audit: BPF prog-id=10 op=LOAD Sep 6 00:20:31.264000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:20:31.264000 audit: BPF prog-id=11 op=LOAD Sep 6 00:20:31.264000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:20:31.297000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:20:31.297000 audit[912]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018f8cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:31.297000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:20:31.299000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:20:31.299000 audit[912]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018f9a5 a2=1ed a3=0 items=2 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:31.299000 audit: CWD cwd="/" Sep 6 00:20:31.299000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:31.299000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:20:31.299000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:20:33.477000 audit: BPF prog-id=12 op=LOAD Sep 6 00:20:33.477000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:20:33.477000 audit: BPF prog-id=13 op=LOAD Sep 6 00:20:33.477000 audit: BPF prog-id=14 op=LOAD Sep 6 00:20:33.620143 systemd[1]: Started systemd-journald.service. Sep 6 00:20:33.477000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:20:33.477000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:20:33.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.491000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:20:33.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.596000 audit: BPF prog-id=15 op=LOAD Sep 6 00:20:33.596000 audit: BPF prog-id=16 op=LOAD Sep 6 00:20:33.596000 audit: BPF prog-id=17 op=LOAD Sep 6 00:20:33.596000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:20:33.596000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:20:33.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:33.616000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:20:33.616000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc3cb58880 a2=4000 a3=7ffc3cb5891c items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:33.616000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:20:33.476045 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:20:31.296778 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:20:33.476062 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:20:31.297038 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:20:33.480021 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 6 00:20:31.297073 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:20:31.297103 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:20:31.297112 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:20:31.297141 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:20:31.297153 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:20:31.297349 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:20:31.297382 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:20:33.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:31.297394 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:20:31.298022 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:20:31.298052 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:20:31.298068 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:20:33.621888 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:20:31.298081 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:20:31.298096 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:20:31.298109 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:20:33.197033 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:33Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:20:33.197306 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:33Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:20:33.197418 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:33Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:20:33.197574 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:33Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:20:33.197624 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:33Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:20:33.197691 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-09-06T00:20:33Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" 
TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:20:33.622870 systemd[1]: Mounted media.mount. Sep 6 00:20:33.623692 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:20:33.624583 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:20:33.625502 systemd[1]: Mounted tmp.mount. Sep 6 00:20:33.626473 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:20:33.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.627616 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:20:33.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.628717 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:20:33.628880 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:20:33.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.629986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:33.630124 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:33.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.631191 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:20:33.631356 systemd[1]: Finished modprobe@drm.service. Sep 6 00:20:33.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.632508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:33.632687 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:33.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:20:33.633783 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:20:33.633972 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:20:33.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.635100 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:33.635283 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:33.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.636359 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:20:33.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.637503 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:20:33.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.638787 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:20:33.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.640107 systemd[1]: Reached target network-pre.target. Sep 6 00:20:33.642129 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:20:33.644077 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:20:33.644869 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:20:33.647174 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:20:33.649157 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:20:33.650341 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:33.651437 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:20:33.652515 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:33.653604 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:20:33.654415 systemd-journald[989]: Time spent on flushing to /var/log/journal/21c160b1fa2e41d9a044a398ead963c6 is 19.912ms for 1091 entries. Sep 6 00:20:33.654415 systemd-journald[989]: System Journal (/var/log/journal/21c160b1fa2e41d9a044a398ead963c6) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:20:33.690029 systemd-journald[989]: Received client request to flush runtime journal. 
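The journald lines above show the runtime journal in /run being flushed into the persistent one under /var/log/journal once the root filesystem is writable. The same flush can be requested manually, and the journal's footprint and caps inspected or tuned, for example:

    journalctl --flush        # what systemd-journal-flush.service asks journald to do
    journalctl --disk-usage
    # Persistent limits live in /etc/systemd/journald.conf, e.g.:
    #   [Journal]
    #   SystemMaxUse=200M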
Sep 6 00:20:33.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.656783 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:20:33.660297 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:20:33.661261 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:20:33.690715 udevadm[1016]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:20:33.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:33.662317 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:20:33.663486 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:20:33.664733 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:20:33.666878 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:20:33.668900 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:20:33.679156 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:20:33.690722 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:20:34.080517 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:20:34.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.081000 audit: BPF prog-id=18 op=LOAD Sep 6 00:20:34.081000 audit: BPF prog-id=19 op=LOAD Sep 6 00:20:34.081000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:20:34.081000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:20:34.082830 systemd[1]: Starting systemd-udevd.service... Sep 6 00:20:34.098039 systemd-udevd[1018]: Using default interface naming scheme 'v252'. Sep 6 00:20:34.110755 systemd[1]: Started systemd-udevd.service. Sep 6 00:20:34.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.111000 audit: BPF prog-id=20 op=LOAD Sep 6 00:20:34.113455 systemd[1]: Starting systemd-networkd.service... Sep 6 00:20:34.116000 audit: BPF prog-id=21 op=LOAD Sep 6 00:20:34.116000 audit: BPF prog-id=22 op=LOAD Sep 6 00:20:34.116000 audit: BPF prog-id=23 op=LOAD Sep 6 00:20:34.118234 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:20:34.145799 systemd[1]: Started systemd-userdbd.service. 
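systemd-networkd is started here and, as the following lines show, brings eth0 up from the catch-all /usr/lib/systemd/network/zz-default.network policy and takes a DHCPv4 lease. A minimal user-supplied equivalent, should one want to override that default (file name and match pattern are illustrative):

    cat > /etc/systemd/network/50-dhcp.network <<'EOF'
    [Match]
    Name=eth*

    [Network]
    DHCP=yes
    EOF
    networkctl status eth0    # shows the acquired address and gateway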
Sep 6 00:20:34.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.154232 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:20:34.173407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:20:34.184943 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:20:34.189410 systemd-networkd[1025]: lo: Link UP Sep 6 00:20:34.189420 systemd-networkd[1025]: lo: Gained carrier Sep 6 00:20:34.189941 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:20:34.190050 systemd-networkd[1025]: Enumeration completed Sep 6 00:20:34.190130 systemd[1]: Started systemd-networkd.service. Sep 6 00:20:34.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.191851 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:20:34.193731 systemd-networkd[1025]: eth0: Link UP Sep 6 00:20:34.193740 systemd-networkd[1025]: eth0: Gained carrier Sep 6 00:20:34.205030 systemd-networkd[1025]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:20:34.204000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:20:34.204000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560d873c5170 a1=338ec a2=7f62a1d78bc5 a3=5 items=110 ppid=1018 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:34.204000 audit: CWD cwd="/" Sep 6 00:20:34.204000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=1 name=(null) inode=11952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=2 name=(null) inode=11952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=3 name=(null) inode=11953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=4 name=(null) inode=11952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=5 name=(null) inode=11954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=6 name=(null) inode=11952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=7 name=(null) inode=11955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=8 name=(null) inode=11955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=9 name=(null) inode=11956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=10 name=(null) inode=11955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=11 name=(null) inode=11957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=12 name=(null) inode=11955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=13 name=(null) inode=11958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=14 name=(null) inode=11955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=15 name=(null) inode=11959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=16 name=(null) inode=11955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=17 name=(null) inode=11960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=18 name=(null) inode=11952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=19 name=(null) inode=11961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=20 name=(null) inode=11961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=21 name=(null) inode=11962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=22 name=(null) inode=11961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=23 name=(null) inode=11963 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=24 name=(null) inode=11961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=25 name=(null) inode=11964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=26 name=(null) inode=11961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=27 name=(null) inode=11965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=28 name=(null) inode=11961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=29 name=(null) inode=11966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=30 name=(null) inode=11952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=31 name=(null) inode=11967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=32 name=(null) inode=11967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=33 name=(null) inode=11968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=34 name=(null) inode=11967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=35 name=(null) inode=11969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=36 name=(null) inode=11967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=37 name=(null) inode=11970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=38 name=(null) inode=11967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=39 name=(null) inode=11971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=40 name=(null) inode=11967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=41 name=(null) inode=11972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=42 name=(null) inode=11952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=43 name=(null) inode=11973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=44 name=(null) inode=11973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=45 name=(null) inode=11974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=46 name=(null) inode=11973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=47 name=(null) inode=11975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=48 name=(null) inode=11973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=49 name=(null) inode=11976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=50 name=(null) inode=11973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=51 name=(null) inode=11977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=52 name=(null) inode=11973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=53 name=(null) inode=11978 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=55 name=(null) inode=11979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 
audit: PATH item=56 name=(null) inode=11979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=57 name=(null) inode=11980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=58 name=(null) inode=11979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=59 name=(null) inode=11981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=60 name=(null) inode=11979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=61 name=(null) inode=11982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=62 name=(null) inode=11982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=63 name=(null) inode=11983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=64 name=(null) inode=11982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=65 name=(null) inode=11984 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=66 name=(null) inode=11982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=67 name=(null) inode=11985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=68 name=(null) inode=11982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=69 name=(null) inode=11986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=70 name=(null) inode=11982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=71 name=(null) inode=11987 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=72 name=(null) inode=11979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=73 name=(null) inode=11988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=74 name=(null) inode=11988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=75 name=(null) inode=11989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=76 name=(null) inode=11988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=77 name=(null) inode=11990 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=78 name=(null) inode=11988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=79 name=(null) inode=11991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=80 name=(null) inode=11988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=81 name=(null) inode=11992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=82 name=(null) inode=11988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=83 name=(null) inode=11993 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=84 name=(null) inode=11979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=85 name=(null) inode=11994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=86 name=(null) inode=11994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=87 name=(null) inode=11995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=88 name=(null) inode=11994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=89 name=(null) inode=11996 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=90 name=(null) inode=11994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=91 name=(null) inode=11997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=92 name=(null) inode=11994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=93 name=(null) inode=11998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=94 name=(null) inode=11994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=95 name=(null) inode=11999 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=96 name=(null) inode=11979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=97 name=(null) inode=12000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=98 name=(null) inode=12000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=99 name=(null) inode=12001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=100 name=(null) inode=12000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=101 name=(null) inode=12002 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=102 name=(null) inode=12000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=103 name=(null) inode=12003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=104 name=(null) inode=12000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=105 name=(null) inode=12004 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=106 name=(null) inode=12000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=107 name=(null) inode=12005 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.238924 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 6 00:20:34.204000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PATH item=109 name=(null) inode=10984 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:20:34.204000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 00:20:34.243940 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:20:34.246943 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 6 00:20:34.247133 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 6 00:20:34.247256 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 6 00:20:34.299499 kernel: kvm: Nested Virtualization enabled Sep 6 00:20:34.299576 kernel: SVM: kvm: Nested Paging enabled Sep 6 00:20:34.299606 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 6 00:20:34.300150 kernel: SVM: Virtual GIF supported Sep 6 00:20:34.316949 kernel: EDAC MC: Ver: 3.0.0 Sep 6 00:20:34.340335 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:20:34.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.342575 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:20:34.349884 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:34.379703 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:20:34.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.380730 systemd[1]: Reached target cryptsetup.target. Sep 6 00:20:34.382519 systemd[1]: Starting lvm2-activation.service... Sep 6 00:20:34.386068 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:20:34.411661 systemd[1]: Finished lvm2-activation.service. Sep 6 00:20:34.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.412635 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:20:34.413521 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:20:34.413545 systemd[1]: Reached target local-fs.target. Sep 6 00:20:34.414383 systemd[1]: Reached target machines.target. 
Sep 6 00:20:34.416177 systemd[1]: Starting ldconfig.service... Sep 6 00:20:34.417278 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:34.417345 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:34.418536 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:20:34.420222 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:20:34.422221 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:20:34.424104 systemd[1]: Starting systemd-sysext.service... Sep 6 00:20:34.425293 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) Sep 6 00:20:34.426243 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:20:34.436145 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:20:34.440142 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:20:34.440337 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:20:34.444702 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:20:34.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.452936 kernel: loop0: detected capacity change from 0 to 229808 Sep 6 00:20:34.465413 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31) Sep 6 00:20:34.465413 systemd-fsck[1064]: /dev/vda1: 790 files, 120761/258078 clusters Sep 6 00:20:34.466679 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:20:34.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.469570 systemd[1]: Mounting boot.mount... Sep 6 00:20:34.482079 systemd[1]: Mounted boot.mount. Sep 6 00:20:34.686428 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:20:34.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.687024 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:20:34.689400 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:20:34.692098 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:20:34.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.707932 kernel: loop1: detected capacity change from 0 to 229808 Sep 6 00:20:34.712254 (sd-sysext)[1070]: Using extensions 'kubernetes'. Sep 6 00:20:34.713019 (sd-sysext)[1070]: Merged extensions into '/usr'. Sep 6 00:20:34.728464 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:34.729903 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:20:34.731124 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Sep 6 00:20:34.732463 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:34.734580 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:34.736959 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:34.737882 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:34.738052 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:34.738190 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:34.742112 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:20:34.741162 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:20:34.742419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:34.742577 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:34.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.744034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:34.744178 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:34.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.745790 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:34.745945 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:34.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.747611 systemd[1]: Finished ldconfig.service. Sep 6 00:20:34.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.748862 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:34.749012 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:34.749901 systemd[1]: Finished systemd-sysext.service. 
Sep 6 00:20:34.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.752175 systemd[1]: Starting ensure-sysext.service... Sep 6 00:20:34.754252 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:20:34.759267 systemd[1]: Reloading. Sep 6 00:20:34.770189 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:20:34.772739 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:20:34.776194 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:20:34.814188 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-09-06T00:20:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:20:34.814220 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-09-06T00:20:34Z" level=info msg="torcx already run" Sep 6 00:20:34.888582 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:20:34.888600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:20:34.905881 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:20:34.960000 audit: BPF prog-id=24 op=LOAD Sep 6 00:20:34.960000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:20:34.963000 audit: BPF prog-id=25 op=LOAD Sep 6 00:20:34.964000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:20:34.964000 audit: BPF prog-id=26 op=LOAD Sep 6 00:20:34.964000 audit: BPF prog-id=27 op=LOAD Sep 6 00:20:34.964000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:20:34.964000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:20:34.965000 audit: BPF prog-id=28 op=LOAD Sep 6 00:20:34.965000 audit: BPF prog-id=29 op=LOAD Sep 6 00:20:34.965000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:20:34.966000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:20:34.967000 audit: BPF prog-id=30 op=LOAD Sep 6 00:20:34.967000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:20:34.967000 audit: BPF prog-id=31 op=LOAD Sep 6 00:20:34.967000 audit: BPF prog-id=32 op=LOAD Sep 6 00:20:34.967000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:20:34.967000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:20:34.971609 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:20:34.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.977463 systemd[1]: Starting audit-rules.service... Sep 6 00:20:34.979823 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:20:34.982169 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:20:34.983000 audit: BPF prog-id=33 op=LOAD Sep 6 00:20:34.985413 systemd[1]: Starting systemd-resolved.service... 
Sep 6 00:20:34.986000 audit: BPF prog-id=34 op=LOAD Sep 6 00:20:34.988358 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:20:34.990846 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:20:34.992341 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:20:34.996009 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:34.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:34.996000 audit[1149]: SYSTEM_BOOT pid=1149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:20:35.001525 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:35.001776 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.003114 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:35.005262 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:35.007682 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:35.008705 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.008926 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:35.009093 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:35.009226 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:35.011323 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:20:35.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:20:35.013000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:20:35.013000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4acb3140 a2=420 a3=0 items=0 ppid=1138 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:20:35.013000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:20:35.013341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:35.014481 augenrules[1161]: No rules Sep 6 00:20:35.013508 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:35.015394 systemd[1]: Finished audit-rules.service. Sep 6 00:20:35.016844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:35.016964 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:35.018477 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 6 00:20:35.018582 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:35.020035 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:35.020181 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.021416 systemd[1]: Starting systemd-update-done.service... Sep 6 00:20:35.022841 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:20:35.025759 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:35.026446 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.027579 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:35.029465 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:35.031449 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:35.032279 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.032378 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:35.032459 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:20:35.032520 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:35.033407 systemd[1]: Finished systemd-update-done.service. Sep 6 00:20:35.034965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:35.035107 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:35.036513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:35.036632 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:35.038125 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:35.038298 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:35.039633 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:35.039723 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.042011 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:35.042222 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.043973 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:20:35.045890 systemd[1]: Starting modprobe@drm.service... Sep 6 00:20:35.048158 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:20:35.049945 systemd[1]: Starting modprobe@loop.service... Sep 6 00:20:35.050789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.050886 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:35.051989 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:20:35.053167 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 6 00:20:35.053258 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:20:35.053802 systemd-resolved[1143]: Positive Trust Anchors: Sep 6 00:20:35.053811 systemd-resolved[1143]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:20:35.053838 systemd-resolved[1143]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:20:35.054355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:20:35.054478 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:20:35.055883 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:20:35.056160 systemd[1]: Finished modprobe@drm.service. Sep 6 00:20:35.057797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:20:35.058026 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:20:35.059729 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:20:35.061232 systemd-timesyncd[1148]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:20:35.061373 systemd-timesyncd[1148]: Initial clock synchronization to Sat 2025-09-06 00:20:35.433036 UTC. Sep 6 00:20:35.061636 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:20:35.061737 systemd[1]: Finished modprobe@loop.service. Sep 6 00:20:35.063327 systemd[1]: Reached target time-set.target. Sep 6 00:20:35.063959 systemd-resolved[1143]: Defaulting to hostname 'linux'. Sep 6 00:20:35.064380 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:20:35.064417 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.064832 systemd[1]: Finished ensure-sysext.service. Sep 6 00:20:35.066720 systemd[1]: Started systemd-resolved.service. Sep 6 00:20:35.067859 systemd[1]: Reached target network.target. Sep 6 00:20:35.068960 systemd[1]: Reached target nss-lookup.target. Sep 6 00:20:35.069839 systemd[1]: Reached target sysinit.target. Sep 6 00:20:35.070860 systemd[1]: Started motdgen.path. Sep 6 00:20:35.071971 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:20:35.073318 systemd[1]: Started logrotate.timer. Sep 6 00:20:35.074179 systemd[1]: Started mdadm.timer. Sep 6 00:20:35.074932 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:20:35.075850 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:20:35.075885 systemd[1]: Reached target paths.target. Sep 6 00:20:35.076824 systemd[1]: Reached target timers.target. Sep 6 00:20:35.077971 systemd[1]: Listening on dbus.socket. Sep 6 00:20:35.079890 systemd[1]: Starting docker.socket... Sep 6 00:20:35.083313 systemd[1]: Listening on sshd.socket. Sep 6 00:20:35.084381 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:35.084713 systemd[1]: Listening on docker.socket. 
Sep 6 00:20:35.085626 systemd[1]: Reached target sockets.target. Sep 6 00:20:35.086523 systemd[1]: Reached target basic.target. Sep 6 00:20:35.087423 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.087445 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:20:35.088734 systemd[1]: Starting containerd.service... Sep 6 00:20:35.090664 systemd[1]: Starting dbus.service... Sep 6 00:20:35.092730 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:20:35.094981 systemd[1]: Starting extend-filesystems.service... Sep 6 00:20:35.096182 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:20:35.097463 systemd[1]: Starting motdgen.service... Sep 6 00:20:35.098476 jq[1180]: false Sep 6 00:20:35.099357 systemd[1]: Starting prepare-helm.service... Sep 6 00:20:35.101113 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:20:35.103208 systemd[1]: Starting sshd-keygen.service... Sep 6 00:20:35.107443 systemd[1]: Starting systemd-logind.service... Sep 6 00:20:35.108553 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:20:35.108629 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:20:35.109143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:20:35.112197 systemd[1]: Starting update-engine.service... Sep 6 00:20:35.113950 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:20:35.116497 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:20:35.116706 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:20:35.118231 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:20:35.118655 jq[1198]: true Sep 6 00:20:35.118384 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:20:35.125434 jq[1202]: true Sep 6 00:20:35.127164 extend-filesystems[1181]: Found loop1 Sep 6 00:20:35.127164 extend-filesystems[1181]: Found sr0 Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda1 Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda2 Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda3 Sep 6 00:20:35.129157 extend-filesystems[1181]: Found usr Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda4 Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda6 Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda7 Sep 6 00:20:35.129157 extend-filesystems[1181]: Found vda9 Sep 6 00:20:35.129157 extend-filesystems[1181]: Checking size of /dev/vda9 Sep 6 00:20:35.138369 tar[1200]: linux-amd64/LICENSE Sep 6 00:20:35.138369 tar[1200]: linux-amd64/helm Sep 6 00:20:35.132707 systemd[1]: Started dbus.service. Sep 6 00:20:35.132436 dbus-daemon[1179]: [system] SELinux support is enabled Sep 6 00:20:35.139876 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:20:35.140130 systemd[1]: Finished motdgen.service. 
Sep 6 00:20:35.141539 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:20:35.141584 systemd[1]: Reached target system-config.target. Sep 6 00:20:35.143305 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:20:35.143333 systemd[1]: Reached target user-config.target. Sep 6 00:20:35.156157 extend-filesystems[1181]: Resized partition /dev/vda9 Sep 6 00:20:35.162167 extend-filesystems[1219]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:20:35.166935 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 00:20:35.178690 update_engine[1195]: I0906 00:20:35.178440 1195 main.cc:92] Flatcar Update Engine starting Sep 6 00:20:35.185195 update_engine[1195]: I0906 00:20:35.180997 1195 update_check_scheduler.cc:74] Next update check in 7m57s Sep 6 00:20:35.180934 systemd[1]: Started update-engine.service. Sep 6 00:20:35.184079 systemd[1]: Started locksmithd.service. Sep 6 00:20:35.195949 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 00:20:35.221626 systemd-logind[1191]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:20:35.221659 systemd-logind[1191]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:20:35.223023 extend-filesystems[1219]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:20:35.223023 extend-filesystems[1219]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:20:35.223023 extend-filesystems[1219]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 00:20:35.237479 extend-filesystems[1181]: Resized filesystem in /dev/vda9 Sep 6 00:20:35.238610 bash[1230]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:20:35.224235 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:20:35.238780 env[1203]: time="2025-09-06T00:20:35.224689855Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:20:35.224499 systemd-logind[1191]: New seat seat0. Sep 6 00:20:35.225992 systemd[1]: Finished extend-filesystems.service. Sep 6 00:20:35.231808 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:20:35.234285 systemd[1]: Started systemd-logind.service. Sep 6 00:20:35.256554 env[1203]: time="2025-09-06T00:20:35.256489541Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:20:35.256716 env[1203]: time="2025-09-06T00:20:35.256688654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:35.258653 env[1203]: time="2025-09-06T00:20:35.258612091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:35.258653 env[1203]: time="2025-09-06T00:20:35.258646446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:35.258999 env[1203]: time="2025-09-06T00:20:35.258974120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:35.258999 env[1203]: time="2025-09-06T00:20:35.258995631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:35.259079 env[1203]: time="2025-09-06T00:20:35.259007994Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:20:35.259079 env[1203]: time="2025-09-06T00:20:35.259017341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:35.259125 env[1203]: time="2025-09-06T00:20:35.259083395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:35.259332 env[1203]: time="2025-09-06T00:20:35.259288440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:20:35.259428 env[1203]: time="2025-09-06T00:20:35.259407854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:20:35.259428 env[1203]: time="2025-09-06T00:20:35.259424896Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:20:35.259478 env[1203]: time="2025-09-06T00:20:35.259467455Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:20:35.259502 env[1203]: time="2025-09-06T00:20:35.259478506Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266577670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266618967Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266632172Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266686183Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266706972Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266724455Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266765512Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266802141Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266839962Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266874957Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266930101Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.266965387Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.267231316Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:20:35.268930 env[1203]: time="2025-09-06T00:20:35.267452821Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:20:35.268237 locksmithd[1232]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268422469Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268450141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268468816Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268521275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268533347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268544679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268554567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268565147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268575576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268589743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268599892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268612636Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268745345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268769530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.269819 env[1203]: time="2025-09-06T00:20:35.268780851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 6 00:20:35.270136 env[1203]: time="2025-09-06T00:20:35.268795619Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:20:35.270136 env[1203]: time="2025-09-06T00:20:35.268809675Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:20:35.270136 env[1203]: time="2025-09-06T00:20:35.268819754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:20:35.270136 env[1203]: time="2025-09-06T00:20:35.268839141Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:20:35.270136 env[1203]: time="2025-09-06T00:20:35.268882452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 00:20:35.270501 env[1203]: time="2025-09-06T00:20:35.270443409Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:20:35.272358 env[1203]: time="2025-09-06T00:20:35.270640509Z" level=info msg="Connect containerd service" Sep 6 00:20:35.272358 env[1203]: time="2025-09-06T00:20:35.272003936Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:20:35.272975 env[1203]: time="2025-09-06T00:20:35.272944259Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:20:35.273175 env[1203]: time="2025-09-06T00:20:35.273098469Z" level=info msg="Start subscribing containerd event" Sep 6 00:20:35.273175 env[1203]: time="2025-09-06T00:20:35.273172588Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:20:35.273267 env[1203]: time="2025-09-06T00:20:35.273204297Z" level=info msg="Start recovering state" Sep 6 00:20:35.273392 env[1203]: time="2025-09-06T00:20:35.273209517Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:20:35.273616 env[1203]: time="2025-09-06T00:20:35.273562098Z" level=info msg="containerd successfully booted in 0.081920s" Sep 6 00:20:35.273718 env[1203]: time="2025-09-06T00:20:35.273650514Z" level=info msg="Start event monitor" Sep 6 00:20:35.273718 env[1203]: time="2025-09-06T00:20:35.273700608Z" level=info msg="Start snapshots syncer" Sep 6 00:20:35.273718 env[1203]: time="2025-09-06T00:20:35.273716848Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:20:35.273681 systemd[1]: Started containerd.service. Sep 6 00:20:35.273998 env[1203]: time="2025-09-06T00:20:35.273730013Z" level=info msg="Start streaming server" Sep 6 00:20:35.715766 sshd_keygen[1201]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:20:35.746098 systemd[1]: Finished sshd-keygen.service. Sep 6 00:20:35.749894 systemd[1]: Starting issuegen.service... Sep 6 00:20:35.755629 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:20:35.755824 systemd[1]: Finished issuegen.service. Sep 6 00:20:35.758156 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:20:35.796390 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:20:35.798948 systemd[1]: Started getty@tty1.service. Sep 6 00:20:35.801503 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:20:35.802619 systemd[1]: Reached target getty.target. Sep 6 00:20:35.820199 tar[1200]: linux-amd64/README.md Sep 6 00:20:35.824049 systemd[1]: Finished prepare-helm.service. Sep 6 00:20:36.169212 systemd-networkd[1025]: eth0: Gained IPv6LL Sep 6 00:20:36.171512 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:20:36.172810 systemd[1]: Reached target network-online.target. Sep 6 00:20:36.175403 systemd[1]: Starting kubelet.service... Sep 6 00:20:37.404328 systemd[1]: Started kubelet.service. Sep 6 00:20:37.405704 systemd[1]: Reached target multi-user.target. Sep 6 00:20:37.411110 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:20:37.416540 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:20:37.416693 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:20:37.417829 systemd[1]: Startup finished in 849ms (kernel) + 4.976s (initrd) + 6.501s (userspace) = 12.327s. Sep 6 00:20:37.845082 kubelet[1263]: E0906 00:20:37.844904 1263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:20:37.847358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:20:37.847502 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 00:20:37.847770 systemd[1]: kubelet.service: Consumed 1.504s CPU time. Sep 6 00:20:39.449687 systemd[1]: Created slice system-sshd.slice. Sep 6 00:20:39.451287 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:47510.service. Sep 6 00:20:39.495933 sshd[1273]: Accepted publickey for core from 10.0.0.1 port 47510 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:20:39.497798 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:39.507293 systemd[1]: Created slice user-500.slice. Sep 6 00:20:39.508602 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:20:39.510766 systemd-logind[1191]: New session 1 of user core. Sep 6 00:20:39.517139 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:20:39.518688 systemd[1]: Starting user@500.service... Sep 6 00:20:39.521523 (systemd)[1276]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:39.592908 systemd[1276]: Queued start job for default target default.target. Sep 6 00:20:39.593432 systemd[1276]: Reached target paths.target. Sep 6 00:20:39.593452 systemd[1276]: Reached target sockets.target. Sep 6 00:20:39.593463 systemd[1276]: Reached target timers.target. Sep 6 00:20:39.593474 systemd[1276]: Reached target basic.target. Sep 6 00:20:39.593511 systemd[1276]: Reached target default.target. Sep 6 00:20:39.593533 systemd[1276]: Startup finished in 66ms. Sep 6 00:20:39.593598 systemd[1]: Started user@500.service. Sep 6 00:20:39.594625 systemd[1]: Started session-1.scope. Sep 6 00:20:39.648274 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:47512.service. Sep 6 00:20:39.686767 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 47512 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:20:39.688347 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:39.692585 systemd-logind[1191]: New session 2 of user core. Sep 6 00:20:39.693704 systemd[1]: Started session-2.scope. Sep 6 00:20:39.751277 sshd[1285]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:39.754043 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:47512.service: Deactivated successfully. Sep 6 00:20:39.754572 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:20:39.755096 systemd-logind[1191]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:20:39.756035 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:53890.service. Sep 6 00:20:39.756805 systemd-logind[1191]: Removed session 2. Sep 6 00:20:39.789527 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 53890 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:20:39.790535 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:39.794082 systemd-logind[1191]: New session 3 of user core. Sep 6 00:20:39.795197 systemd[1]: Started session-3.scope. Sep 6 00:20:39.846536 sshd[1291]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:39.849953 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:53898.service. Sep 6 00:20:39.850436 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:53890.service: Deactivated successfully. Sep 6 00:20:39.850931 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:20:39.851466 systemd-logind[1191]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:20:39.852394 systemd-logind[1191]: Removed session 3. 
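Each sshd@...service / session-N.scope pair above is a socket-activated per-connection sshd instance plus a logind session for user core. A quick way to inspect them (sketch, commands not taken from the log):

# List the per-connection sshd units and the logind sessions they opened.
systemctl list-units 'sshd@*.service' --no-legend
loginctl list-sessions
loginctl show-session 5 -p User -p Scope -p RemoteHost   # session number assumed from the log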
Sep 6 00:20:39.883806 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 53898 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:20:39.884898 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:39.888015 systemd-logind[1191]: New session 4 of user core. Sep 6 00:20:39.888772 systemd[1]: Started session-4.scope. Sep 6 00:20:39.943319 sshd[1296]: pam_unix(sshd:session): session closed for user core Sep 6 00:20:39.946858 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:53898.service: Deactivated successfully. Sep 6 00:20:39.947529 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:20:39.948154 systemd-logind[1191]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:20:39.949429 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:53908.service. Sep 6 00:20:39.950412 systemd-logind[1191]: Removed session 4. Sep 6 00:20:39.983616 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 53908 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:20:39.984812 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:20:39.988142 systemd-logind[1191]: New session 5 of user core. Sep 6 00:20:39.988887 systemd[1]: Started session-5.scope. Sep 6 00:20:40.047687 sudo[1306]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:20:40.047888 sudo[1306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:20:40.070493 systemd[1]: Starting docker.service... Sep 6 00:20:40.108363 env[1318]: time="2025-09-06T00:20:40.108290071Z" level=info msg="Starting up" Sep 6 00:20:40.109673 env[1318]: time="2025-09-06T00:20:40.109620341Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:20:40.109673 env[1318]: time="2025-09-06T00:20:40.109645052Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:20:40.109673 env[1318]: time="2025-09-06T00:20:40.109664390Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:20:40.109673 env[1318]: time="2025-09-06T00:20:40.109673961Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:20:40.111485 env[1318]: time="2025-09-06T00:20:40.111441665Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:20:40.111485 env[1318]: time="2025-09-06T00:20:40.111468320Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:20:40.111580 env[1318]: time="2025-09-06T00:20:40.111490190Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:20:40.111580 env[1318]: time="2025-09-06T00:20:40.111504167Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:20:40.988704 env[1318]: time="2025-09-06T00:20:40.988640610Z" level=info msg="Loading containers: start." Sep 6 00:20:41.104962 kernel: Initializing XFRM netlink socket Sep 6 00:20:41.132796 env[1318]: time="2025-09-06T00:20:41.132752731Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:20:41.187351 systemd-networkd[1025]: docker0: Link UP Sep 6 00:20:41.204132 env[1318]: time="2025-09-06T00:20:41.204080353Z" level=info msg="Loading containers: done." 
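The daemon's note that "--bip can be used to set a preferred IP address" maps to the "bip" key in daemon.json. A hypothetical override is sketched below; the 172.18.0.1/16 value is an example, not something from this log:

# Hypothetical /etc/docker/daemon.json moving docker0 off the default 172.17.0.0/16.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "bip": "172.18.0.1/16"
}
EOF
sudo systemctl restart docker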
Sep 6 00:20:41.214168 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2011969177-merged.mount: Deactivated successfully. Sep 6 00:20:41.215791 env[1318]: time="2025-09-06T00:20:41.215744112Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:20:41.215991 env[1318]: time="2025-09-06T00:20:41.215975998Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:20:41.216095 env[1318]: time="2025-09-06T00:20:41.216082206Z" level=info msg="Daemon has completed initialization" Sep 6 00:20:41.234237 systemd[1]: Started docker.service. Sep 6 00:20:41.241057 env[1318]: time="2025-09-06T00:20:41.240910075Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:20:42.111811 env[1203]: time="2025-09-06T00:20:42.111741882Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 6 00:20:42.753981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571252345.mount: Deactivated successfully. Sep 6 00:20:44.247593 env[1203]: time="2025-09-06T00:20:44.247518613Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:44.249374 env[1203]: time="2025-09-06T00:20:44.249332013Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:44.251216 env[1203]: time="2025-09-06T00:20:44.251158391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:44.252539 env[1203]: time="2025-09-06T00:20:44.252501754Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:44.253182 env[1203]: time="2025-09-06T00:20:44.253150746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 6 00:20:44.253729 env[1203]: time="2025-09-06T00:20:44.253702429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 6 00:20:47.195896 env[1203]: time="2025-09-06T00:20:47.195772916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:47.198048 env[1203]: time="2025-09-06T00:20:47.198025305Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:47.200556 env[1203]: time="2025-09-06T00:20:47.200511837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:47.203524 env[1203]: time="2025-09-06T00:20:47.203486119Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:47.203869 env[1203]: time="2025-09-06T00:20:47.203838373Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 6 00:20:47.204488 env[1203]: time="2025-09-06T00:20:47.204462338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 6 00:20:48.099276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:20:48.099677 systemd[1]: Stopped kubelet.service. Sep 6 00:20:48.099769 systemd[1]: kubelet.service: Consumed 1.504s CPU time. Sep 6 00:20:48.102612 systemd[1]: Starting kubelet.service... Sep 6 00:20:48.249957 systemd[1]: Started kubelet.service. Sep 6 00:20:48.550633 kubelet[1453]: E0906 00:20:48.550483 1453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:20:48.553738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:20:48.553882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:20:50.136230 env[1203]: time="2025-09-06T00:20:50.136153428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:50.138293 env[1203]: time="2025-09-06T00:20:50.138243579Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:50.140354 env[1203]: time="2025-09-06T00:20:50.140291429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:50.143088 env[1203]: time="2025-09-06T00:20:50.143034281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:50.144107 env[1203]: time="2025-09-06T00:20:50.144041758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 6 00:20:50.144718 env[1203]: time="2025-09-06T00:20:50.144627852Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 6 00:20:52.197203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137995538.mount: Deactivated successfully. 
Sep 6 00:20:53.210666 env[1203]: time="2025-09-06T00:20:53.210600930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:53.212534 env[1203]: time="2025-09-06T00:20:53.212509404Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:53.214418 env[1203]: time="2025-09-06T00:20:53.214367363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:53.215866 env[1203]: time="2025-09-06T00:20:53.215814028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:53.216152 env[1203]: time="2025-09-06T00:20:53.216116660Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 6 00:20:53.216719 env[1203]: time="2025-09-06T00:20:53.216679159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 6 00:20:53.795023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472051216.mount: Deactivated successfully. Sep 6 00:20:54.988220 env[1203]: time="2025-09-06T00:20:54.988130987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:54.989886 env[1203]: time="2025-09-06T00:20:54.989861622Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:54.991907 env[1203]: time="2025-09-06T00:20:54.991842371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:54.993582 env[1203]: time="2025-09-06T00:20:54.993550359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:54.994385 env[1203]: time="2025-09-06T00:20:54.994353993Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 6 00:20:54.995039 env[1203]: time="2025-09-06T00:20:54.995012549Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:20:55.563648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543541493.mount: Deactivated successfully. 
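The var-lib-containerd-tmpmounts-containerd\x2dmount... unit names above are systemd's path escaping ("-" inside a path component becomes \x2d); systemd-escape reproduces the mapping:

# Reproduce the escaped mount-unit name for one of the temporary mounts above.
systemd-escape --path --suffix=mount /var/lib/containerd/tmpmounts/containerd-mount1543541493
# -> var-lib-containerd-tmpmounts-containerd\x2dmount1543541493.mount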
Sep 6 00:20:55.568729 env[1203]: time="2025-09-06T00:20:55.568676322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:55.570563 env[1203]: time="2025-09-06T00:20:55.570533818Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:55.572152 env[1203]: time="2025-09-06T00:20:55.572107373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:55.573444 env[1203]: time="2025-09-06T00:20:55.573414585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:55.573891 env[1203]: time="2025-09-06T00:20:55.573861165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 00:20:55.574364 env[1203]: time="2025-09-06T00:20:55.574332443Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 6 00:20:56.093529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864858682.mount: Deactivated successfully. Sep 6 00:20:58.769295 env[1203]: time="2025-09-06T00:20:58.769219013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:58.771289 env[1203]: time="2025-09-06T00:20:58.771232869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:58.773226 env[1203]: time="2025-09-06T00:20:58.773199541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:58.774957 env[1203]: time="2025-09-06T00:20:58.774925772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:20:58.775629 env[1203]: time="2025-09-06T00:20:58.775590409Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 6 00:20:58.804799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:20:58.805055 systemd[1]: Stopped kubelet.service. Sep 6 00:20:58.806733 systemd[1]: Starting kubelet.service... Sep 6 00:20:58.901836 systemd[1]: Started kubelet.service. 
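The pause:3.10 pull above sits next to a CRI config that still advertises SandboxImage registry.k8s.io/pause:3.6 (see the cri plugin config dump earlier in the log). A hypothetical containerd 1.6 config fragment to align the two, to be merged by hand into the existing section before restarting containerd:

# Hypothetical fragment for /etc/containerd/config.toml; merge into the
# existing [plugins."io.containerd.grpc.v1.cri"] section, then restart.
cat <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.10"
EOF
sudo systemctl restart containerd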
Sep 6 00:20:58.938300 kubelet[1470]: E0906 00:20:58.938247 1470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:20:58.940797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:20:58.940981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:21:01.700500 systemd[1]: Stopped kubelet.service. Sep 6 00:21:01.702703 systemd[1]: Starting kubelet.service... Sep 6 00:21:01.727313 systemd[1]: Reloading. Sep 6 00:21:01.808503 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2025-09-06T00:21:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:21:01.808542 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2025-09-06T00:21:01Z" level=info msg="torcx already run" Sep 6 00:21:02.289861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:21:02.289881 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:21:02.307184 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:21:02.384747 systemd[1]: Started kubelet.service. Sep 6 00:21:02.386288 systemd[1]: Stopping kubelet.service... Sep 6 00:21:02.386515 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:21:02.386668 systemd[1]: Stopped kubelet.service. Sep 6 00:21:02.388066 systemd[1]: Starting kubelet.service... Sep 6 00:21:02.480965 systemd[1]: Started kubelet.service. Sep 6 00:21:02.511668 kubelet[1568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:02.511668 kubelet[1568]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:21:02.511668 kubelet[1568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
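The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. A hypothetical KubeletConfiguration fragment carrying the same values (field names per kubelet.config.k8s.io/v1beta1; --pod-infra-container-image has no config-file equivalent and, per the log, is removed in 1.35):

# Hypothetical /var/lib/kubelet/config.yaml fragment replacing the deprecated flags.
cat <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF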
Sep 6 00:21:02.512121 kubelet[1568]: I0906 00:21:02.511679 1568 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:21:02.848490 kubelet[1568]: I0906 00:21:02.848429 1568 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 00:21:02.848490 kubelet[1568]: I0906 00:21:02.848462 1568 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:21:02.848703 kubelet[1568]: I0906 00:21:02.848687 1568 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 00:21:02.878032 kubelet[1568]: I0906 00:21:02.878000 1568 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:21:02.878962 kubelet[1568]: E0906 00:21:02.878939 1568 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 6 00:21:02.884659 kubelet[1568]: E0906 00:21:02.884626 1568 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:21:02.884659 kubelet[1568]: I0906 00:21:02.884659 1568 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:21:02.888750 kubelet[1568]: I0906 00:21:02.888731 1568 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:21:02.888950 kubelet[1568]: I0906 00:21:02.888924 1568 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:21:02.889102 kubelet[1568]: I0906 00:21:02.888946 1568 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:21:02.889102 kubelet[1568]: I0906 00:21:02.889098 1568 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:21:02.889223 kubelet[1568]: I0906 00:21:02.889106 1568 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 00:21:02.889827 kubelet[1568]: I0906 00:21:02.889808 1568 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:02.895634 kubelet[1568]: I0906 00:21:02.895604 1568 kubelet.go:480] "Attempting to sync node with API server" Sep 6 00:21:02.895634 kubelet[1568]: I0906 00:21:02.895625 1568 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:21:02.895634 kubelet[1568]: I0906 00:21:02.895648 1568 kubelet.go:386] "Adding apiserver pod source" Sep 6 00:21:02.895839 kubelet[1568]: I0906 00:21:02.895662 1568 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:21:02.918643 kubelet[1568]: I0906 00:21:02.918618 1568 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:21:02.919121 kubelet[1568]: I0906 00:21:02.919099 1568 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 00:21:02.923963 kubelet[1568]: W0906 00:21:02.923935 1568 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 6 00:21:02.924843 kubelet[1568]: E0906 00:21:02.924802 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 6 00:21:02.926108 kubelet[1568]: I0906 00:21:02.926084 1568 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:21:02.926169 kubelet[1568]: I0906 00:21:02.926131 1568 server.go:1289] "Started kubelet" Sep 6 00:21:02.928512 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 00:21:02.933225 kubelet[1568]: I0906 00:21:02.933205 1568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:21:02.936241 kubelet[1568]: E0906 00:21:02.936203 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 6 00:21:02.936393 kubelet[1568]: I0906 00:21:02.936347 1568 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:21:02.937391 kubelet[1568]: I0906 00:21:02.937303 1568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:21:02.937650 kubelet[1568]: I0906 00:21:02.937634 1568 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:21:02.939696 kubelet[1568]: I0906 00:21:02.939651 1568 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:21:02.939846 kubelet[1568]: I0906 00:21:02.939827 1568 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:21:02.939977 kubelet[1568]: E0906 00:21:02.939953 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:02.940257 kubelet[1568]: I0906 00:21:02.940232 1568 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:21:02.940312 kubelet[1568]: I0906 00:21:02.940290 1568 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:21:02.940637 kubelet[1568]: E0906 00:21:02.940592 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 6 00:21:02.940823 kubelet[1568]: E0906 00:21:02.940774 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Sep 6 00:21:02.940926 kubelet[1568]: E0906 00:21:02.940010 1568 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186289919121fc1e default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:21:02.926101534 +0000 UTC m=+0.442119880,LastTimestamp:2025-09-06 00:21:02.926101534 +0000 UTC m=+0.442119880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:21:02.941343 kubelet[1568]: I0906 00:21:02.941016 1568 factory.go:223] Registration of the systemd container factory successfully Sep 6 00:21:02.941343 kubelet[1568]: I0906 00:21:02.941088 1568 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:21:02.941895 kubelet[1568]: I0906 00:21:02.941873 1568 server.go:317] "Adding debug handlers to kubelet server" Sep 6 00:21:02.942253 kubelet[1568]: E0906 00:21:02.942224 1568 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:21:02.943988 kubelet[1568]: I0906 00:21:02.943959 1568 factory.go:223] Registration of the containerd container factory successfully Sep 6 00:21:02.954814 kubelet[1568]: I0906 00:21:02.954751 1568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 00:21:02.956790 kubelet[1568]: I0906 00:21:02.956760 1568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 6 00:21:02.956790 kubelet[1568]: I0906 00:21:02.956784 1568 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 00:21:02.956869 kubelet[1568]: I0906 00:21:02.956801 1568 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
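Every "dial tcp 10.0.0.108:6443: connect: connection refused" above has the same cause: nothing is serving the API yet, because the control-plane static pods are only created further down. A quick hedged check from the node:

# Nothing should be listening on 6443 until the kube-apiserver sandbox starts.
ss -ltn 'sport = :6443'
curl -sk https://10.0.0.108:6443/healthz || echo "expected: apiserver not up yet"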
Sep 6 00:21:02.956869 kubelet[1568]: I0906 00:21:02.956808 1568 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 00:21:02.956869 kubelet[1568]: E0906 00:21:02.956843 1568 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:21:02.957490 kubelet[1568]: E0906 00:21:02.957462 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 00:21:02.958696 kubelet[1568]: I0906 00:21:02.958675 1568 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:21:02.958696 kubelet[1568]: I0906 00:21:02.958689 1568 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:21:02.958696 kubelet[1568]: I0906 00:21:02.958702 1568 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:03.040877 kubelet[1568]: E0906 00:21:03.040816 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:03.057053 kubelet[1568]: E0906 00:21:03.056986 1568 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:21:03.141584 kubelet[1568]: E0906 00:21:03.141453 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:03.141932 kubelet[1568]: E0906 00:21:03.141882 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Sep 6 00:21:03.241541 kubelet[1568]: E0906 00:21:03.241499 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:03.257812 kubelet[1568]: E0906 00:21:03.257775 1568 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:21:03.342366 kubelet[1568]: E0906 00:21:03.342319 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:03.423182 kubelet[1568]: I0906 00:21:03.423040 1568 policy_none.go:49] "None policy: Start" Sep 6 00:21:03.423182 kubelet[1568]: I0906 00:21:03.423078 1568 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:21:03.423182 kubelet[1568]: I0906 00:21:03.423090 1568 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:21:03.442432 kubelet[1568]: E0906 00:21:03.442387 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:03.456014 systemd[1]: Created slice kubepods.slice. Sep 6 00:21:03.460028 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:21:03.462449 systemd[1]: Created slice kubepods-besteffort.slice. 
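The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units just created form the kubelet's QoS cgroup tree (systemd driver, cgroup v2 per the logged config); they can be inspected directly:

# Inspect the QoS slices the kubelet just created (cgroup v2 unified hierarchy).
systemctl list-units --type=slice 'kubepods*' --no-legend
ls /sys/fs/cgroup/kubepods.slice/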
Sep 6 00:21:03.472716 kubelet[1568]: E0906 00:21:03.472669 1568 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 00:21:03.472850 kubelet[1568]: I0906 00:21:03.472822 1568 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:21:03.472904 kubelet[1568]: I0906 00:21:03.472841 1568 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:21:03.473556 kubelet[1568]: I0906 00:21:03.473056 1568 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:21:03.474053 kubelet[1568]: E0906 00:21:03.473987 1568 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:21:03.474109 kubelet[1568]: E0906 00:21:03.474061 1568 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:21:03.543396 kubelet[1568]: E0906 00:21:03.543322 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Sep 6 00:21:03.574708 kubelet[1568]: I0906 00:21:03.574659 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:21:03.575010 kubelet[1568]: E0906 00:21:03.574965 1568 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 6 00:21:03.667641 systemd[1]: Created slice kubepods-burstable-podf582a2f89fc7cf3082cfb77328ca5243.slice. Sep 6 00:21:03.672515 kubelet[1568]: E0906 00:21:03.672483 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:21:03.674892 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 6 00:21:03.681787 kubelet[1568]: E0906 00:21:03.681762 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:21:03.683832 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
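The per-pod slices above (…pod f582…, …pod 8de7…, …pod d75e…) correspond to static pod manifests in the directory the kubelet was told to watch. A sketch of where they come from; the file names are assumptions based on the standard kubeadm layout, not read from this log:

# The static pods behind the kubepods-burstable-pod*.slice units above.
ls /etc/kubernetes/manifests/
# expected on a kubeadm control plane: etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
grep -m1 'image:' /etc/kubernetes/manifests/kube-apiserver.yaml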
Sep 6 00:21:03.685136 kubelet[1568]: E0906 00:21:03.685115 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:21:03.743570 kubelet[1568]: I0906 00:21:03.743543 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:03.743570 kubelet[1568]: I0906 00:21:03.743567 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:03.743681 kubelet[1568]: I0906 00:21:03.743582 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f582a2f89fc7cf3082cfb77328ca5243-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f582a2f89fc7cf3082cfb77328ca5243\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:03.743681 kubelet[1568]: I0906 00:21:03.743596 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:03.743681 kubelet[1568]: I0906 00:21:03.743619 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f582a2f89fc7cf3082cfb77328ca5243-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f582a2f89fc7cf3082cfb77328ca5243\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:03.743681 kubelet[1568]: I0906 00:21:03.743632 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f582a2f89fc7cf3082cfb77328ca5243-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f582a2f89fc7cf3082cfb77328ca5243\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:03.743681 kubelet[1568]: I0906 00:21:03.743646 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:03.743813 kubelet[1568]: I0906 00:21:03.743660 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:03.743813 kubelet[1568]: I0906 00:21:03.743672 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:03.776569 kubelet[1568]: I0906 00:21:03.776522 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:21:03.776987 kubelet[1568]: E0906 00:21:03.776953 1568 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 6 00:21:03.809509 kubelet[1568]: E0906 00:21:03.809472 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 00:21:03.974022 kubelet[1568]: E0906 00:21:03.973872 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:03.974646 env[1203]: time="2025-09-06T00:21:03.974599021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f582a2f89fc7cf3082cfb77328ca5243,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:03.982467 kubelet[1568]: E0906 00:21:03.982432 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:03.983058 env[1203]: time="2025-09-06T00:21:03.982998483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:03.986203 kubelet[1568]: E0906 00:21:03.986166 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:03.986505 env[1203]: time="2025-09-06T00:21:03.986461399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:04.011192 kubelet[1568]: E0906 00:21:04.011155 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 6 00:21:04.178216 kubelet[1568]: I0906 00:21:04.178172 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:21:04.178477 kubelet[1568]: E0906 00:21:04.178454 1568 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 6 00:21:04.183867 kubelet[1568]: E0906 00:21:04.183844 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 6 00:21:04.302008 kubelet[1568]: E0906 00:21:04.301895 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 6 00:21:04.343994 kubelet[1568]: E0906 00:21:04.343932 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Sep 6 00:21:04.523458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858431513.mount: Deactivated successfully. Sep 6 00:21:04.527736 env[1203]: time="2025-09-06T00:21:04.527655228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.533742 env[1203]: time="2025-09-06T00:21:04.533681342Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.535055 env[1203]: time="2025-09-06T00:21:04.535032535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.535617 env[1203]: time="2025-09-06T00:21:04.535586475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.538038 env[1203]: time="2025-09-06T00:21:04.538015889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.539070 env[1203]: time="2025-09-06T00:21:04.539016278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.540192 env[1203]: time="2025-09-06T00:21:04.540167315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.541361 env[1203]: time="2025-09-06T00:21:04.541330288Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.542721 env[1203]: time="2025-09-06T00:21:04.542687861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.543786 env[1203]: time="2025-09-06T00:21:04.543756533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.545156 env[1203]: time="2025-09-06T00:21:04.545121306Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.548194 env[1203]: time="2025-09-06T00:21:04.548141008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:04.564017 env[1203]: time="2025-09-06T00:21:04.563877316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:04.564175 env[1203]: time="2025-09-06T00:21:04.563944736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:04.564311 env[1203]: time="2025-09-06T00:21:04.564256063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:04.564572 env[1203]: time="2025-09-06T00:21:04.564512477Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86a963779a0325383d11a24ee0d1efbceb08e0cf262e3cbc32672817030d87d7 pid=1615 runtime=io.containerd.runc.v2 Sep 6 00:21:04.579170 env[1203]: time="2025-09-06T00:21:04.579075029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:04.579316 env[1203]: time="2025-09-06T00:21:04.579173713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:04.579316 env[1203]: time="2025-09-06T00:21:04.579197223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:04.579368 env[1203]: time="2025-09-06T00:21:04.579341402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7eddce6dbbc7701c9a9952f5166fd5781db787e53370935459f420e63be61b35 pid=1648 runtime=io.containerd.runc.v2 Sep 6 00:21:04.579517 env[1203]: time="2025-09-06T00:21:04.579465622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:04.579645 env[1203]: time="2025-09-06T00:21:04.579619962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:04.579753 env[1203]: time="2025-09-06T00:21:04.579727031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:04.580042 env[1203]: time="2025-09-06T00:21:04.580015680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5956e43c2ffb72a723a3426ff3f47cc1e4f8f63f9d9d737dec22cca796346abe pid=1647 runtime=io.containerd.runc.v2 Sep 6 00:21:04.584443 systemd[1]: Started cri-containerd-86a963779a0325383d11a24ee0d1efbceb08e0cf262e3cbc32672817030d87d7.scope. Sep 6 00:21:04.595399 systemd[1]: Started cri-containerd-7eddce6dbbc7701c9a9952f5166fd5781db787e53370935459f420e63be61b35.scope. Sep 6 00:21:04.601565 systemd[1]: Started cri-containerd-5956e43c2ffb72a723a3426ff3f47cc1e4f8f63f9d9d737dec22cca796346abe.scope. 
Sep 6 00:21:04.620480 env[1203]: time="2025-09-06T00:21:04.620437533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"86a963779a0325383d11a24ee0d1efbceb08e0cf262e3cbc32672817030d87d7\"" Sep 6 00:21:04.621751 kubelet[1568]: E0906 00:21:04.621588 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:04.627581 env[1203]: time="2025-09-06T00:21:04.627554986Z" level=info msg="CreateContainer within sandbox \"86a963779a0325383d11a24ee0d1efbceb08e0cf262e3cbc32672817030d87d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:21:04.631017 env[1203]: time="2025-09-06T00:21:04.630980978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"7eddce6dbbc7701c9a9952f5166fd5781db787e53370935459f420e63be61b35\"" Sep 6 00:21:04.631634 kubelet[1568]: E0906 00:21:04.631511 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:04.636016 env[1203]: time="2025-09-06T00:21:04.635972430Z" level=info msg="CreateContainer within sandbox \"7eddce6dbbc7701c9a9952f5166fd5781db787e53370935459f420e63be61b35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:21:04.644337 env[1203]: time="2025-09-06T00:21:04.644273479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f582a2f89fc7cf3082cfb77328ca5243,Namespace:kube-system,Attempt:0,} returns sandbox id \"5956e43c2ffb72a723a3426ff3f47cc1e4f8f63f9d9d737dec22cca796346abe\"" Sep 6 00:21:04.645117 kubelet[1568]: E0906 00:21:04.645082 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:04.645952 env[1203]: time="2025-09-06T00:21:04.645834517Z" level=info msg="CreateContainer within sandbox \"86a963779a0325383d11a24ee0d1efbceb08e0cf262e3cbc32672817030d87d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d8f5d9a7e607162cdcfb0f6e65d96c5fc24629956d6fda46e6c89bddd1e2d32\"" Sep 6 00:21:04.647414 env[1203]: time="2025-09-06T00:21:04.647355125Z" level=info msg="StartContainer for \"0d8f5d9a7e607162cdcfb0f6e65d96c5fc24629956d6fda46e6c89bddd1e2d32\"" Sep 6 00:21:04.650060 env[1203]: time="2025-09-06T00:21:04.650038474Z" level=info msg="CreateContainer within sandbox \"5956e43c2ffb72a723a3426ff3f47cc1e4f8f63f9d9d737dec22cca796346abe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:21:04.660322 env[1203]: time="2025-09-06T00:21:04.660268386Z" level=info msg="CreateContainer within sandbox \"7eddce6dbbc7701c9a9952f5166fd5781db787e53370935459f420e63be61b35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b9243493b073e73007e972bb0fee235909c2fa71ef707e1bd40ecbaaa84ca97\"" Sep 6 00:21:04.660849 env[1203]: time="2025-09-06T00:21:04.660826457Z" level=info msg="StartContainer for \"6b9243493b073e73007e972bb0fee235909c2fa71ef707e1bd40ecbaaa84ca97\"" Sep 6 00:21:04.665849 systemd[1]: Started 
cri-containerd-0d8f5d9a7e607162cdcfb0f6e65d96c5fc24629956d6fda46e6c89bddd1e2d32.scope. Sep 6 00:21:04.667791 env[1203]: time="2025-09-06T00:21:04.667754497Z" level=info msg="CreateContainer within sandbox \"5956e43c2ffb72a723a3426ff3f47cc1e4f8f63f9d9d737dec22cca796346abe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3fc2b252245e249235eac43a13f87d5d1875eda7a78cd4ecab4055dca1879e6a\"" Sep 6 00:21:04.668270 env[1203]: time="2025-09-06T00:21:04.668236823Z" level=info msg="StartContainer for \"3fc2b252245e249235eac43a13f87d5d1875eda7a78cd4ecab4055dca1879e6a\"" Sep 6 00:21:04.680467 systemd[1]: Started cri-containerd-6b9243493b073e73007e972bb0fee235909c2fa71ef707e1bd40ecbaaa84ca97.scope. Sep 6 00:21:04.687547 systemd[1]: Started cri-containerd-3fc2b252245e249235eac43a13f87d5d1875eda7a78cd4ecab4055dca1879e6a.scope. Sep 6 00:21:04.713937 env[1203]: time="2025-09-06T00:21:04.712811452Z" level=info msg="StartContainer for \"0d8f5d9a7e607162cdcfb0f6e65d96c5fc24629956d6fda46e6c89bddd1e2d32\" returns successfully" Sep 6 00:21:04.727814 env[1203]: time="2025-09-06T00:21:04.727752943Z" level=info msg="StartContainer for \"6b9243493b073e73007e972bb0fee235909c2fa71ef707e1bd40ecbaaa84ca97\" returns successfully" Sep 6 00:21:04.731372 env[1203]: time="2025-09-06T00:21:04.731337837Z" level=info msg="StartContainer for \"3fc2b252245e249235eac43a13f87d5d1875eda7a78cd4ecab4055dca1879e6a\" returns successfully" Sep 6 00:21:04.969903 kubelet[1568]: E0906 00:21:04.969772 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:21:04.969903 kubelet[1568]: E0906 00:21:04.969881 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:04.971409 kubelet[1568]: E0906 00:21:04.971382 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:21:04.971486 kubelet[1568]: E0906 00:21:04.971460 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:04.972655 kubelet[1568]: E0906 00:21:04.972630 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:21:04.972731 kubelet[1568]: E0906 00:21:04.972707 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:04.979183 kubelet[1568]: I0906 00:21:04.979157 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:21:05.974731 kubelet[1568]: E0906 00:21:05.974688 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:21:05.975233 kubelet[1568]: E0906 00:21:05.974799 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:05.975233 kubelet[1568]: E0906 00:21:05.975014 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Sep 6 00:21:05.975233 kubelet[1568]: E0906 00:21:05.975090 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:05.980031 kubelet[1568]: E0906 00:21:05.979986 1568 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 6 00:21:06.044489 kubelet[1568]: I0906 00:21:06.044454 1568 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 00:21:06.044746 kubelet[1568]: E0906 00:21:06.044730 1568 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 00:21:06.053407 kubelet[1568]: E0906 00:21:06.053377 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.154106 kubelet[1568]: E0906 00:21:06.154043 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.254859 kubelet[1568]: E0906 00:21:06.254708 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.355473 kubelet[1568]: E0906 00:21:06.355405 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.455775 kubelet[1568]: E0906 00:21:06.455708 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.556712 kubelet[1568]: E0906 00:21:06.556576 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.657779 kubelet[1568]: E0906 00:21:06.657726 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.758187 kubelet[1568]: E0906 00:21:06.758144 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:21:06.919927 kubelet[1568]: I0906 00:21:06.919769 1568 apiserver.go:52] "Watching apiserver" Sep 6 00:21:06.941012 kubelet[1568]: I0906 00:21:06.940967 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:06.941196 kubelet[1568]: I0906 00:21:06.940993 1568 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:21:06.948613 kubelet[1568]: I0906 00:21:06.948572 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:06.952306 kubelet[1568]: I0906 00:21:06.952285 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:06.953385 kubelet[1568]: E0906 00:21:06.953362 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:06.975527 kubelet[1568]: I0906 00:21:06.975492 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:06.976043 kubelet[1568]: I0906 00:21:06.975795 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:06.980098 kubelet[1568]: 
E0906 00:21:06.980075 1568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:06.980231 kubelet[1568]: E0906 00:21:06.980214 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:06.981615 kubelet[1568]: E0906 00:21:06.981561 1568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:06.981828 kubelet[1568]: E0906 00:21:06.981760 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:07.930583 systemd[1]: Reloading. Sep 6 00:21:07.980006 kubelet[1568]: E0906 00:21:07.979987 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:07.980786 kubelet[1568]: E0906 00:21:07.980770 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:07.993532 /usr/lib/systemd/system-generators/torcx-generator[1881]: time="2025-09-06T00:21:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:21:07.993927 /usr/lib/systemd/system-generators/torcx-generator[1881]: time="2025-09-06T00:21:07Z" level=info msg="torcx already run" Sep 6 00:21:08.060040 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:21:08.060056 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:21:08.080229 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:21:08.193812 systemd[1]: Stopping kubelet.service... Sep 6 00:21:08.220475 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:21:08.220806 systemd[1]: Stopped kubelet.service. Sep 6 00:21:08.223110 systemd[1]: Starting kubelet.service... Sep 6 00:21:08.370762 systemd[1]: Started kubelet.service. Sep 6 00:21:08.754969 kubelet[1927]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:21:08.754969 kubelet[1927]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:21:08.754969 kubelet[1927]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:21:08.755406 kubelet[1927]: I0906 00:21:08.755008 1927 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:21:08.760789 kubelet[1927]: I0906 00:21:08.760750 1927 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 00:21:08.760789 kubelet[1927]: I0906 00:21:08.760775 1927 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:21:08.761056 kubelet[1927]: I0906 00:21:08.761032 1927 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 00:21:08.763543 kubelet[1927]: I0906 00:21:08.763523 1927 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 6 00:21:08.765627 kubelet[1927]: I0906 00:21:08.765606 1927 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:21:08.774845 kubelet[1927]: E0906 00:21:08.774806 1927 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:21:08.774845 kubelet[1927]: I0906 00:21:08.774844 1927 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:21:08.779111 kubelet[1927]: I0906 00:21:08.779083 1927 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:21:08.779350 kubelet[1927]: I0906 00:21:08.779306 1927 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:21:08.779512 kubelet[1927]: I0906 00:21:08.779342 1927 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:21:08.779624 kubelet[1927]: I0906 00:21:08.779516 1927 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 6 00:21:08.779624 kubelet[1927]: I0906 00:21:08.779531 1927 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 00:21:08.779624 kubelet[1927]: I0906 00:21:08.779580 1927 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:08.779714 kubelet[1927]: I0906 00:21:08.779701 1927 kubelet.go:480] "Attempting to sync node with API server" Sep 6 00:21:08.779763 kubelet[1927]: I0906 00:21:08.779723 1927 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:21:08.779763 kubelet[1927]: I0906 00:21:08.779766 1927 kubelet.go:386] "Adding apiserver pod source" Sep 6 00:21:08.779763 kubelet[1927]: I0906 00:21:08.779783 1927 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:21:08.781623 kubelet[1927]: I0906 00:21:08.781581 1927 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:21:08.782343 kubelet[1927]: I0906 00:21:08.782306 1927 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 00:21:08.790378 kubelet[1927]: I0906 00:21:08.790355 1927 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:21:08.790445 kubelet[1927]: I0906 00:21:08.790411 1927 server.go:1289] "Started kubelet" Sep 6 00:21:08.790946 kubelet[1927]: I0906 00:21:08.790837 1927 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:21:08.791140 kubelet[1927]: I0906 00:21:08.791084 1927 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:21:08.791507 kubelet[1927]: I0906 00:21:08.791436 1927 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:21:08.792507 kubelet[1927]: I0906 00:21:08.792431 1927 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:21:08.795086 kubelet[1927]: I0906 00:21:08.795055 1927 server.go:317] "Adding debug handlers to kubelet server" Sep 6 00:21:08.795733 kubelet[1927]: I0906 00:21:08.795587 1927 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:21:08.796501 kubelet[1927]: I0906 00:21:08.796473 1927 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:21:08.796842 kubelet[1927]: I0906 00:21:08.796805 1927 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:21:08.797037 kubelet[1927]: I0906 00:21:08.797015 1927 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:21:08.798339 kubelet[1927]: I0906 00:21:08.798308 1927 factory.go:223] Registration of the systemd container factory successfully Sep 6 00:21:08.798517 kubelet[1927]: I0906 00:21:08.798440 1927 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:21:08.800393 kubelet[1927]: E0906 00:21:08.800367 1927 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:21:08.800563 kubelet[1927]: I0906 00:21:08.800544 1927 factory.go:223] Registration of the containerd container factory successfully Sep 6 00:21:08.825670 kubelet[1927]: I0906 00:21:08.825618 1927 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 00:21:08.827730 kubelet[1927]: I0906 00:21:08.827714 1927 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 6 00:21:08.827838 kubelet[1927]: I0906 00:21:08.827823 1927 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 00:21:08.827952 kubelet[1927]: I0906 00:21:08.827932 1927 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:21:08.828040 kubelet[1927]: I0906 00:21:08.828023 1927 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 00:21:08.828228 kubelet[1927]: E0906 00:21:08.828197 1927 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:21:08.841968 kubelet[1927]: I0906 00:21:08.841942 1927 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:21:08.842151 kubelet[1927]: I0906 00:21:08.842132 1927 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:21:08.842253 kubelet[1927]: I0906 00:21:08.842240 1927 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:21:08.842476 kubelet[1927]: I0906 00:21:08.842460 1927 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:21:08.842574 kubelet[1927]: I0906 00:21:08.842540 1927 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:21:08.842669 kubelet[1927]: I0906 00:21:08.842651 1927 policy_none.go:49] "None policy: Start" Sep 6 00:21:08.842760 kubelet[1927]: I0906 00:21:08.842744 1927 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:21:08.842853 kubelet[1927]: I0906 00:21:08.842829 1927 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:21:08.843038 kubelet[1927]: I0906 00:21:08.843024 1927 state_mem.go:75] "Updated machine memory state" Sep 6 00:21:08.847618 kubelet[1927]: E0906 00:21:08.847570 1927 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 00:21:08.847853 kubelet[1927]: I0906 00:21:08.847829 1927 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:21:08.847963 kubelet[1927]: I0906 00:21:08.847934 1927 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:21:08.848438 kubelet[1927]: I0906 00:21:08.848425 1927 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:21:08.850025 kubelet[1927]: E0906 00:21:08.850011 1927 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 6 00:21:08.929438 kubelet[1927]: I0906 00:21:08.929402 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:08.929722 kubelet[1927]: I0906 00:21:08.929408 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:08.929814 kubelet[1927]: I0906 00:21:08.929414 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:08.931395 sudo[1965]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:21:08.932183 sudo[1965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:21:08.936127 kubelet[1927]: E0906 00:21:08.936080 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:08.936264 kubelet[1927]: E0906 00:21:08.936235 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:08.936599 kubelet[1927]: E0906 00:21:08.936565 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:08.955869 kubelet[1927]: I0906 00:21:08.955808 1927 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:21:08.965794 kubelet[1927]: I0906 00:21:08.965757 1927 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 6 00:21:08.966029 kubelet[1927]: I0906 00:21:08.965878 1927 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 00:21:09.098598 kubelet[1927]: I0906 00:21:09.098482 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f582a2f89fc7cf3082cfb77328ca5243-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f582a2f89fc7cf3082cfb77328ca5243\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:09.098598 kubelet[1927]: I0906 00:21:09.098513 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f582a2f89fc7cf3082cfb77328ca5243-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f582a2f89fc7cf3082cfb77328ca5243\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:09.103349 kubelet[1927]: I0906 00:21:09.098669 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:09.103426 kubelet[1927]: I0906 00:21:09.103387 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:09.103426 kubelet[1927]: I0906 00:21:09.103411 1927 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:09.103489 kubelet[1927]: I0906 00:21:09.103429 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:09.103489 kubelet[1927]: I0906 00:21:09.103448 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:09.103489 kubelet[1927]: I0906 00:21:09.103469 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:09.103489 kubelet[1927]: I0906 00:21:09.103485 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f582a2f89fc7cf3082cfb77328ca5243-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f582a2f89fc7cf3082cfb77328ca5243\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:09.236867 kubelet[1927]: E0906 00:21:09.236815 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:09.237098 kubelet[1927]: E0906 00:21:09.236925 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:09.237098 kubelet[1927]: E0906 00:21:09.237053 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:09.443868 sudo[1965]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:09.780147 kubelet[1927]: I0906 00:21:09.780002 1927 apiserver.go:52] "Watching apiserver" Sep 6 00:21:09.838870 kubelet[1927]: I0906 00:21:09.838836 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:09.839397 kubelet[1927]: I0906 00:21:09.839103 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:09.839397 kubelet[1927]: I0906 00:21:09.839138 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:09.945047 kubelet[1927]: E0906 00:21:09.944985 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:09.945229 kubelet[1927]: E0906 
00:21:09.945215 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:09.945317 kubelet[1927]: E0906 00:21:09.944996 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:09.945596 kubelet[1927]: E0906 00:21:09.945566 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:09.946182 kubelet[1927]: E0906 00:21:09.946133 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:21:09.946290 kubelet[1927]: E0906 00:21:09.946246 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:10.397822 kubelet[1927]: I0906 00:21:10.397778 1927 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:21:10.840054 kubelet[1927]: I0906 00:21:10.839954 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:10.859995 kubelet[1927]: I0906 00:21:10.859956 1927 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:10.859995 kubelet[1927]: E0906 00:21:10.859973 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:10.910602 kubelet[1927]: E0906 00:21:10.910171 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:21:10.910602 kubelet[1927]: E0906 00:21:10.910369 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:10.910602 kubelet[1927]: E0906 00:21:10.910517 1927 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 6 00:21:10.910602 kubelet[1927]: E0906 00:21:10.910599 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:11.086012 sudo[1306]: pam_unix(sudo:session): session closed for user root Sep 6 00:21:11.087454 sshd[1303]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:11.089758 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:53908.service: Deactivated successfully. Sep 6 00:21:11.090578 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:21:11.090729 systemd[1]: session-5.scope: Consumed 4.803s CPU time. Sep 6 00:21:11.091593 systemd-logind[1191]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:21:11.092430 systemd-logind[1191]: Removed session 5. 
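The dns.go:153 "Nameserver limits exceeded" warnings that recur throughout this boot are raised because the host resolv.conf lists more nameservers than kubelet will pass through to pods; the surplus entries are dropped, which is why every warning reports the same applied line "1.1.1.1 1.0.0.1 8.8.8.8". A minimal Python sketch of that truncation follows; the cap of three, the example resolv.conf, and the helper name applied_nameservers are illustrative assumptions, not read from this system:

    # Sketch only: illustrates the nameserver truncation behind the dns.go:153 warnings.
    # MAX_NAMESERVERS and the example resolv.conf text are assumptions for illustration.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str):
        """Return the nameservers that would be applied and whether any were dropped."""
        servers = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS], len(servers) > MAX_NAMESERVERS

    example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    applied, truncated = applied_nameservers(example)
    if truncated:
        print("Nameserver limits exceeded, applied nameserver line:", " ".join(applied))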
Sep 6 00:21:11.841539 kubelet[1927]: E0906 00:21:11.841506 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:11.841539 kubelet[1927]: E0906 00:21:11.841538 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:11.842040 kubelet[1927]: E0906 00:21:11.841737 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:12.951893 kubelet[1927]: I0906 00:21:12.951854 1927 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:21:12.952314 env[1203]: time="2025-09-06T00:21:12.952240080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:21:12.952591 kubelet[1927]: I0906 00:21:12.952554 1927 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:21:14.268043 systemd[1]: Created slice kubepods-besteffort-pod29f9ab3d_e99f_4315_ba0e_f6f0a7d70796.slice. Sep 6 00:21:14.276149 kubelet[1927]: I0906 00:21:14.276083 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.276066268 podStartE2EDuration="8.276066268s" podCreationTimestamp="2025-09-06 00:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:14.275802397 +0000 UTC m=+5.550781291" watchObservedRunningTime="2025-09-06 00:21:14.276066268 +0000 UTC m=+5.551045153" Sep 6 00:21:14.277877 systemd[1]: Created slice kubepods-burstable-pod4a84b39d_750f_4000_bfd2_cce783e628fe.slice. Sep 6 00:21:14.286443 systemd[1]: Created slice kubepods-besteffort-pod53d8eded_292b_461e_8643_d515bfbc050f.slice. 
Sep 6 00:21:14.299833 kubelet[1927]: I0906 00:21:14.299771 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.299749757 podStartE2EDuration="8.299749757s" podCreationTimestamp="2025-09-06 00:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:14.29089853 +0000 UTC m=+5.565877424" watchObservedRunningTime="2025-09-06 00:21:14.299749757 +0000 UTC m=+5.574728651" Sep 6 00:21:14.308719 kubelet[1927]: I0906 00:21:14.308645 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.308628126 podStartE2EDuration="8.308628126s" podCreationTimestamp="2025-09-06 00:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:14.299742779 +0000 UTC m=+5.574721673" watchObservedRunningTime="2025-09-06 00:21:14.308628126 +0000 UTC m=+5.583607010" Sep 6 00:21:14.341805 kubelet[1927]: I0906 00:21:14.341739 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-bpf-maps\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.341805 kubelet[1927]: I0906 00:21:14.341786 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-hubble-tls\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.341805 kubelet[1927]: I0906 00:21:14.341802 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cni-path\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.341805 kubelet[1927]: I0906 00:21:14.341816 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-etc-cni-netd\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342083 kubelet[1927]: I0906 00:21:14.341833 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a84b39d-750f-4000-bfd2-cce783e628fe-clustermesh-secrets\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342083 kubelet[1927]: I0906 00:21:14.341850 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/29f9ab3d-e99f-4315-ba0e-f6f0a7d70796-kube-proxy\") pod \"kube-proxy-xq8nw\" (UID: \"29f9ab3d-e99f-4315-ba0e-f6f0a7d70796\") " pod="kube-system/kube-proxy-xq8nw" Sep 6 00:21:14.342083 kubelet[1927]: I0906 00:21:14.341865 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/29f9ab3d-e99f-4315-ba0e-f6f0a7d70796-lib-modules\") pod \"kube-proxy-xq8nw\" (UID: \"29f9ab3d-e99f-4315-ba0e-f6f0a7d70796\") " pod="kube-system/kube-proxy-xq8nw" Sep 6 00:21:14.342083 kubelet[1927]: I0906 00:21:14.341954 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-run\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342083 kubelet[1927]: I0906 00:21:14.341997 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-hostproc\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342083 kubelet[1927]: I0906 00:21:14.342011 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-net\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342259 kubelet[1927]: I0906 00:21:14.342046 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-kernel\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342259 kubelet[1927]: I0906 00:21:14.342066 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-lib-modules\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342259 kubelet[1927]: I0906 00:21:14.342094 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqw4j\" (UniqueName: \"kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-kube-api-access-gqw4j\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342259 kubelet[1927]: I0906 00:21:14.342134 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53d8eded-292b-461e-8643-d515bfbc050f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hxzdt\" (UID: \"53d8eded-292b-461e-8643-d515bfbc050f\") " pod="kube-system/cilium-operator-6c4d7847fc-hxzdt" Sep 6 00:21:14.342259 kubelet[1927]: I0906 00:21:14.342159 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b644k\" (UniqueName: \"kubernetes.io/projected/29f9ab3d-e99f-4315-ba0e-f6f0a7d70796-kube-api-access-b644k\") pod \"kube-proxy-xq8nw\" (UID: \"29f9ab3d-e99f-4315-ba0e-f6f0a7d70796\") " pod="kube-system/kube-proxy-xq8nw" Sep 6 00:21:14.342430 kubelet[1927]: I0906 00:21:14.342181 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-xtables-lock\") pod \"cilium-qrgj8\" (UID: 
\"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342430 kubelet[1927]: I0906 00:21:14.342206 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-config-path\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342430 kubelet[1927]: I0906 00:21:14.342229 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f9ab3d-e99f-4315-ba0e-f6f0a7d70796-xtables-lock\") pod \"kube-proxy-xq8nw\" (UID: \"29f9ab3d-e99f-4315-ba0e-f6f0a7d70796\") " pod="kube-system/kube-proxy-xq8nw" Sep 6 00:21:14.342430 kubelet[1927]: I0906 00:21:14.342259 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-cgroup\") pod \"cilium-qrgj8\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " pod="kube-system/cilium-qrgj8" Sep 6 00:21:14.342430 kubelet[1927]: I0906 00:21:14.342298 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpxj\" (UniqueName: \"kubernetes.io/projected/53d8eded-292b-461e-8643-d515bfbc050f-kube-api-access-7jpxj\") pod \"cilium-operator-6c4d7847fc-hxzdt\" (UID: \"53d8eded-292b-461e-8643-d515bfbc050f\") " pod="kube-system/cilium-operator-6c4d7847fc-hxzdt" Sep 6 00:21:14.444389 kubelet[1927]: I0906 00:21:14.444336 1927 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:21:14.575767 kubelet[1927]: E0906 00:21:14.575592 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.576279 env[1203]: time="2025-09-06T00:21:14.576237456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xq8nw,Uid:29f9ab3d-e99f-4315-ba0e-f6f0a7d70796,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:14.581397 kubelet[1927]: E0906 00:21:14.581346 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.581805 env[1203]: time="2025-09-06T00:21:14.581772935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qrgj8,Uid:4a84b39d-750f-4000-bfd2-cce783e628fe,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:14.589318 kubelet[1927]: E0906 00:21:14.589280 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.589652 env[1203]: time="2025-09-06T00:21:14.589616974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hxzdt,Uid:53d8eded-292b-461e-8643-d515bfbc050f,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:14.654124 env[1203]: time="2025-09-06T00:21:14.654044087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:14.654124 env[1203]: time="2025-09-06T00:21:14.654088255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:14.654124 env[1203]: time="2025-09-06T00:21:14.654100698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:14.654355 env[1203]: time="2025-09-06T00:21:14.654252943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0f7f23c650a775e9d0d0418f91d5c529ec2d4c86a4b659b88292a67c6efb53d pid=2025 runtime=io.containerd.runc.v2 Sep 6 00:21:14.659579 env[1203]: time="2025-09-06T00:21:14.659498773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:14.659758 env[1203]: time="2025-09-06T00:21:14.659615904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:14.659758 env[1203]: time="2025-09-06T00:21:14.659694302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:14.661039 env[1203]: time="2025-09-06T00:21:14.660981355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44 pid=2042 runtime=io.containerd.runc.v2 Sep 6 00:21:14.664854 env[1203]: time="2025-09-06T00:21:14.664760635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:14.665000 env[1203]: time="2025-09-06T00:21:14.664882027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:14.665000 env[1203]: time="2025-09-06T00:21:14.664935920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:14.665237 env[1203]: time="2025-09-06T00:21:14.665192071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d pid=2063 runtime=io.containerd.runc.v2 Sep 6 00:21:14.673362 systemd[1]: Started cri-containerd-d0f7f23c650a775e9d0d0418f91d5c529ec2d4c86a4b659b88292a67c6efb53d.scope. Sep 6 00:21:14.680288 systemd[1]: Started cri-containerd-e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44.scope. Sep 6 00:21:14.686614 systemd[1]: Started cri-containerd-ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d.scope. 
Sep 6 00:21:14.688895 kubelet[1927]: E0906 00:21:14.688766 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.713254 env[1203]: time="2025-09-06T00:21:14.713197487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xq8nw,Uid:29f9ab3d-e99f-4315-ba0e-f6f0a7d70796,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0f7f23c650a775e9d0d0418f91d5c529ec2d4c86a4b659b88292a67c6efb53d\"" Sep 6 00:21:14.714048 kubelet[1927]: E0906 00:21:14.714022 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.717953 env[1203]: time="2025-09-06T00:21:14.717514476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qrgj8,Uid:4a84b39d-750f-4000-bfd2-cce783e628fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\"" Sep 6 00:21:14.721318 kubelet[1927]: E0906 00:21:14.720861 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.723300 env[1203]: time="2025-09-06T00:21:14.723146734Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:21:14.724848 env[1203]: time="2025-09-06T00:21:14.724821135Z" level=info msg="CreateContainer within sandbox \"d0f7f23c650a775e9d0d0418f91d5c529ec2d4c86a4b659b88292a67c6efb53d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:21:14.741433 env[1203]: time="2025-09-06T00:21:14.741389143Z" level=info msg="CreateContainer within sandbox \"d0f7f23c650a775e9d0d0418f91d5c529ec2d4c86a4b659b88292a67c6efb53d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8dde1725621f3325aaed2472728d085e394b2342e30afa2e695562fe671de773\"" Sep 6 00:21:14.741705 env[1203]: time="2025-09-06T00:21:14.741451328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hxzdt,Uid:53d8eded-292b-461e-8643-d515bfbc050f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d\"" Sep 6 00:21:14.743575 kubelet[1927]: E0906 00:21:14.742353 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.743740 env[1203]: time="2025-09-06T00:21:14.742830075Z" level=info msg="StartContainer for \"8dde1725621f3325aaed2472728d085e394b2342e30afa2e695562fe671de773\"" Sep 6 00:21:14.760354 systemd[1]: Started cri-containerd-8dde1725621f3325aaed2472728d085e394b2342e30afa2e695562fe671de773.scope. 
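The PullImage request above pins the Cilium image by both tag and digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a...), and the pull completion later in the log reports back the resolved image reference. A small Python sketch, covering only the repository:tag@digest form used here, that splits such a reference into its parts; the helper name split_image_ref is illustrative:

    # Sketch: split a "repository:tag@sha256:..." reference like the Cilium one pulled above.
    # Handles only that form; registries with ports or tag-less references are out of scope.
    def split_image_ref(ref: str) -> dict:
        repo_tag, _, digest = ref.partition("@")
        repo, _, tag = repo_tag.rpartition(":")
        return {"repository": repo, "tag": tag, "digest": digest or None}

    ref = ("quay.io/cilium/cilium:v1.12.5@sha256:"
           "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    print(split_image_ref(ref))
    # {'repository': 'quay.io/cilium/cilium', 'tag': 'v1.12.5', 'digest': 'sha256:06ce2b0a...'}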
Sep 6 00:21:14.790953 env[1203]: time="2025-09-06T00:21:14.790894235Z" level=info msg="StartContainer for \"8dde1725621f3325aaed2472728d085e394b2342e30afa2e695562fe671de773\" returns successfully" Sep 6 00:21:14.850057 kubelet[1927]: E0906 00:21:14.849243 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.852360 kubelet[1927]: E0906 00:21:14.852286 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:14.859087 kubelet[1927]: I0906 00:21:14.859011 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xq8nw" podStartSLOduration=0.85899594 podStartE2EDuration="858.99594ms" podCreationTimestamp="2025-09-06 00:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:14.85854112 +0000 UTC m=+6.133520034" watchObservedRunningTime="2025-09-06 00:21:14.85899594 +0000 UTC m=+6.133974854" Sep 6 00:21:15.773218 kubelet[1927]: E0906 00:21:15.773171 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:15.853891 kubelet[1927]: E0906 00:21:15.853859 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:16.855860 kubelet[1927]: E0906 00:21:16.855811 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:20.090757 update_engine[1195]: I0906 00:21:20.090691 1195 update_attempter.cc:509] Updating boot flags... Sep 6 00:21:21.100820 kubelet[1927]: E0906 00:21:21.100782 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:24.260536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883067740.mount: Deactivated successfully. 
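The pod_startup_latency_tracker entries above report podStartSLOduration and podStartE2EDuration for the static pods and kube-proxy; the numbers are consistent with watchObservedRunningTime minus podCreationTimestamp, less any image-pull window (firstStartedPulling/lastFinishedPulling are zeroed in these entries). A short Python sketch reproducing the kube-proxy-xq8nw figure from the logged values; the formula is inferred from those numbers rather than taken from kubelet source, and parse_ts is an illustrative helper:

    # Sketch: reproduce podStartSLOduration for kube-proxy-xq8nw from the logged timestamps.
    from datetime import datetime, timezone

    def parse_ts(ts: str):
        """Parse '2025-09-06 00:21:14.85899594 +0000 UTC'-style timestamps."""
        date, clock = ts.split()[0], ts.split()[1]
        whole, _, frac = clock.partition(".")
        base = datetime.strptime(f"{date} {whole}", "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        return base, float("0." + frac) if frac else 0.0

    created_dt,  created_frac  = parse_ts("2025-09-06 00:21:14 +0000 UTC")           # podCreationTimestamp
    observed_dt, observed_frac = parse_ts("2025-09-06 00:21:14.85899594 +0000 UTC")  # watchObservedRunningTime
    pull_window = 0.0  # firstStartedPulling/lastFinishedPulling are zero-valued for this pod

    slo = (observed_dt - created_dt).total_seconds() + (observed_frac - created_frac) - pull_window
    print(f"podStartSLOduration={slo:.8f}s")  # 0.85899594s, i.e. the 858.99594ms reported above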
Sep 6 00:21:28.817765 env[1203]: time="2025-09-06T00:21:28.817671381Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:28.819954 env[1203]: time="2025-09-06T00:21:28.819918743Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:28.821362 env[1203]: time="2025-09-06T00:21:28.821326432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:28.821837 env[1203]: time="2025-09-06T00:21:28.821787466Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:21:28.823474 env[1203]: time="2025-09-06T00:21:28.823440561Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:21:28.828299 env[1203]: time="2025-09-06T00:21:28.828269782Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:21:28.840525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210657246.mount: Deactivated successfully. Sep 6 00:21:29.027319 env[1203]: time="2025-09-06T00:21:29.027211770Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\"" Sep 6 00:21:29.027870 env[1203]: time="2025-09-06T00:21:29.027828005Z" level=info msg="StartContainer for \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\"" Sep 6 00:21:29.042941 systemd[1]: Started cri-containerd-7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96.scope. Sep 6 00:21:29.076894 systemd[1]: cri-containerd-7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96.scope: Deactivated successfully. 
Sep 6 00:21:29.202380 env[1203]: time="2025-09-06T00:21:29.202236676Z" level=info msg="StartContainer for \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\" returns successfully" Sep 6 00:21:29.765423 env[1203]: time="2025-09-06T00:21:29.765339099Z" level=info msg="shim disconnected" id=7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96 Sep 6 00:21:29.765423 env[1203]: time="2025-09-06T00:21:29.765406279Z" level=warning msg="cleaning up after shim disconnected" id=7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96 namespace=k8s.io Sep 6 00:21:29.765423 env[1203]: time="2025-09-06T00:21:29.765415219Z" level=info msg="cleaning up dead shim" Sep 6 00:21:29.771717 env[1203]: time="2025-09-06T00:21:29.771681381Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\n" Sep 6 00:21:29.838377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96-rootfs.mount: Deactivated successfully. Sep 6 00:21:29.883827 kubelet[1927]: E0906 00:21:29.883767 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:29.888674 env[1203]: time="2025-09-06T00:21:29.888619591Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:21:29.929234 env[1203]: time="2025-09-06T00:21:29.929174892Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\"" Sep 6 00:21:29.930084 env[1203]: time="2025-09-06T00:21:29.930022086Z" level=info msg="StartContainer for \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\"" Sep 6 00:21:29.948005 systemd[1]: Started cri-containerd-d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0.scope. Sep 6 00:21:29.977217 env[1203]: time="2025-09-06T00:21:29.977139194Z" level=info msg="StartContainer for \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\" returns successfully" Sep 6 00:21:29.989155 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:21:29.989853 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:21:29.990214 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:21:29.992202 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:21:29.993746 systemd[1]: cri-containerd-d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0.scope: Deactivated successfully. Sep 6 00:21:30.002833 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:21:30.015672 env[1203]: time="2025-09-06T00:21:30.015558231Z" level=info msg="shim disconnected" id=d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0 Sep 6 00:21:30.015672 env[1203]: time="2025-09-06T00:21:30.015608171Z" level=warning msg="cleaning up after shim disconnected" id=d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0 namespace=k8s.io Sep 6 00:21:30.015672 env[1203]: time="2025-09-06T00:21:30.015619246Z" level=info msg="cleaning up dead shim" Sep 6 00:21:30.023553 env[1203]: time="2025-09-06T00:21:30.023508694Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2434 runtime=io.containerd.runc.v2\n" Sep 6 00:21:30.838109 systemd[1]: run-containerd-runc-k8s.io-d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0-runc.HbV4Ok.mount: Deactivated successfully. Sep 6 00:21:30.838220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0-rootfs.mount: Deactivated successfully. Sep 6 00:21:30.886767 kubelet[1927]: E0906 00:21:30.886476 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:30.893244 env[1203]: time="2025-09-06T00:21:30.893179542Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:21:30.910022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542890420.mount: Deactivated successfully. Sep 6 00:21:30.914316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676084006.mount: Deactivated successfully. Sep 6 00:21:30.921081 env[1203]: time="2025-09-06T00:21:30.921020132Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\"" Sep 6 00:21:30.921882 env[1203]: time="2025-09-06T00:21:30.921828547Z" level=info msg="StartContainer for \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\"" Sep 6 00:21:30.937246 systemd[1]: Started cri-containerd-0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec.scope. Sep 6 00:21:30.968187 env[1203]: time="2025-09-06T00:21:30.968034448Z" level=info msg="StartContainer for \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\" returns successfully" Sep 6 00:21:30.968382 systemd[1]: cri-containerd-0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec.scope: Deactivated successfully. 
Sep 6 00:21:31.336161 env[1203]: time="2025-09-06T00:21:31.336085259Z" level=info msg="shim disconnected" id=0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec Sep 6 00:21:31.336161 env[1203]: time="2025-09-06T00:21:31.336158661Z" level=warning msg="cleaning up after shim disconnected" id=0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec namespace=k8s.io Sep 6 00:21:31.336409 env[1203]: time="2025-09-06T00:21:31.336174556Z" level=info msg="cleaning up dead shim" Sep 6 00:21:31.342069 env[1203]: time="2025-09-06T00:21:31.342003608Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2489 runtime=io.containerd.runc.v2\n" Sep 6 00:21:31.359632 env[1203]: time="2025-09-06T00:21:31.359590091Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:31.361339 env[1203]: time="2025-09-06T00:21:31.361278165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:31.363135 env[1203]: time="2025-09-06T00:21:31.363087329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:21:31.363521 env[1203]: time="2025-09-06T00:21:31.363488539Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:21:31.368679 env[1203]: time="2025-09-06T00:21:31.368641109Z" level=info msg="CreateContainer within sandbox \"ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:21:31.381409 env[1203]: time="2025-09-06T00:21:31.381375147Z" level=info msg="CreateContainer within sandbox \"ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\"" Sep 6 00:21:31.381812 env[1203]: time="2025-09-06T00:21:31.381781359Z" level=info msg="StartContainer for \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\"" Sep 6 00:21:31.395304 systemd[1]: Started cri-containerd-bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f.scope. 
Sep 6 00:21:31.417590 env[1203]: time="2025-09-06T00:21:31.417539343Z" level=info msg="StartContainer for \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\" returns successfully" Sep 6 00:21:31.889550 kubelet[1927]: E0906 00:21:31.889503 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:31.891179 kubelet[1927]: E0906 00:21:31.891149 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:32.335460 env[1203]: time="2025-09-06T00:21:32.335337134Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:21:32.892648 kubelet[1927]: E0906 00:21:32.892597 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:33.395935 env[1203]: time="2025-09-06T00:21:33.395853468Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\"" Sep 6 00:21:33.396404 env[1203]: time="2025-09-06T00:21:33.396370391Z" level=info msg="StartContainer for \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\"" Sep 6 00:21:33.414234 systemd[1]: Started cri-containerd-345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7.scope. Sep 6 00:21:33.435891 systemd[1]: cri-containerd-345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7.scope: Deactivated successfully. Sep 6 00:21:33.479232 env[1203]: time="2025-09-06T00:21:33.479177179Z" level=info msg="StartContainer for \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\" returns successfully" Sep 6 00:21:33.515797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7-rootfs.mount: Deactivated successfully. Sep 6 00:21:33.604757 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:37480.service. Sep 6 00:21:33.644769 sshd[2581]: Accepted publickey for core from 10.0.0.1 port 37480 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:21:33.646106 sshd[2581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:33.658342 systemd-logind[1191]: New session 6 of user core. Sep 6 00:21:33.659206 systemd[1]: Started session-6.scope. 
Sep 6 00:21:33.688190 env[1203]: time="2025-09-06T00:21:33.687476436Z" level=info msg="shim disconnected" id=345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7 Sep 6 00:21:33.688190 env[1203]: time="2025-09-06T00:21:33.687530394Z" level=warning msg="cleaning up after shim disconnected" id=345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7 namespace=k8s.io Sep 6 00:21:33.688190 env[1203]: time="2025-09-06T00:21:33.687539023Z" level=info msg="cleaning up dead shim" Sep 6 00:21:33.701185 env[1203]: time="2025-09-06T00:21:33.701147427Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2584 runtime=io.containerd.runc.v2\n" Sep 6 00:21:33.853341 kubelet[1927]: I0906 00:21:33.853273 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hxzdt" podStartSLOduration=3.23297159 podStartE2EDuration="19.853255625s" podCreationTimestamp="2025-09-06 00:21:14 +0000 UTC" firstStartedPulling="2025-09-06 00:21:14.744164674 +0000 UTC m=+6.019143568" lastFinishedPulling="2025-09-06 00:21:31.364448709 +0000 UTC m=+22.639427603" observedRunningTime="2025-09-06 00:21:32.611653944 +0000 UTC m=+23.886632838" watchObservedRunningTime="2025-09-06 00:21:33.853255625 +0000 UTC m=+25.128234519" Sep 6 00:21:33.895824 kubelet[1927]: E0906 00:21:33.895788 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:34.065325 env[1203]: time="2025-09-06T00:21:34.062395427Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:21:34.170395 sshd[2581]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:34.173384 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:37480.service: Deactivated successfully. Sep 6 00:21:34.174539 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:21:34.175152 systemd-logind[1191]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:21:34.176010 systemd-logind[1191]: Removed session 6. Sep 6 00:21:34.660524 env[1203]: time="2025-09-06T00:21:34.660438752Z" level=info msg="CreateContainer within sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\"" Sep 6 00:21:34.661483 env[1203]: time="2025-09-06T00:21:34.661425212Z" level=info msg="StartContainer for \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\"" Sep 6 00:21:34.686379 systemd[1]: run-containerd-runc-k8s.io-fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050-runc.2xVrS3.mount: Deactivated successfully. Sep 6 00:21:34.687768 systemd[1]: Started cri-containerd-fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050.scope. 
Sep 6 00:21:34.763967 env[1203]: time="2025-09-06T00:21:34.763904611Z" level=info msg="StartContainer for \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\" returns successfully" Sep 6 00:21:34.901333 kubelet[1927]: E0906 00:21:34.901271 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:34.920818 kubelet[1927]: I0906 00:21:34.920679 1927 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:21:34.951502 kubelet[1927]: I0906 00:21:34.951432 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qrgj8" podStartSLOduration=6.850710942 podStartE2EDuration="20.951412334s" podCreationTimestamp="2025-09-06 00:21:14 +0000 UTC" firstStartedPulling="2025-09-06 00:21:14.722610918 +0000 UTC m=+5.997589813" lastFinishedPulling="2025-09-06 00:21:28.823312311 +0000 UTC m=+20.098291205" observedRunningTime="2025-09-06 00:21:34.919165914 +0000 UTC m=+26.194144808" watchObservedRunningTime="2025-09-06 00:21:34.951412334 +0000 UTC m=+26.226391218" Sep 6 00:21:35.003517 systemd[1]: Created slice kubepods-burstable-pod8f3d2b01_fef5_4e80_9f00_212fe7ace8cf.slice. Sep 6 00:21:35.008757 systemd[1]: Created slice kubepods-burstable-pod150c14a9_510c_4007_8820_52cf76c3447c.slice. Sep 6 00:21:35.084439 kubelet[1927]: I0906 00:21:35.084397 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/150c14a9-510c-4007-8820-52cf76c3447c-config-volume\") pod \"coredns-674b8bbfcf-slp2h\" (UID: \"150c14a9-510c-4007-8820-52cf76c3447c\") " pod="kube-system/coredns-674b8bbfcf-slp2h" Sep 6 00:21:35.084439 kubelet[1927]: I0906 00:21:35.084438 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqkkc\" (UniqueName: \"kubernetes.io/projected/8f3d2b01-fef5-4e80-9f00-212fe7ace8cf-kube-api-access-xqkkc\") pod \"coredns-674b8bbfcf-mxhzd\" (UID: \"8f3d2b01-fef5-4e80-9f00-212fe7ace8cf\") " pod="kube-system/coredns-674b8bbfcf-mxhzd" Sep 6 00:21:35.084439 kubelet[1927]: I0906 00:21:35.084454 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v74z\" (UniqueName: \"kubernetes.io/projected/150c14a9-510c-4007-8820-52cf76c3447c-kube-api-access-5v74z\") pod \"coredns-674b8bbfcf-slp2h\" (UID: \"150c14a9-510c-4007-8820-52cf76c3447c\") " pod="kube-system/coredns-674b8bbfcf-slp2h" Sep 6 00:21:35.084666 kubelet[1927]: I0906 00:21:35.084469 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3d2b01-fef5-4e80-9f00-212fe7ace8cf-config-volume\") pod \"coredns-674b8bbfcf-mxhzd\" (UID: \"8f3d2b01-fef5-4e80-9f00-212fe7ace8cf\") " pod="kube-system/coredns-674b8bbfcf-mxhzd" Sep 6 00:21:35.306974 kubelet[1927]: E0906 00:21:35.306829 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:35.307598 env[1203]: time="2025-09-06T00:21:35.307553617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mxhzd,Uid:8f3d2b01-fef5-4e80-9f00-212fe7ace8cf,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:35.311486 kubelet[1927]: E0906 00:21:35.311459 1927 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:35.311877 env[1203]: time="2025-09-06T00:21:35.311829878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-slp2h,Uid:150c14a9-510c-4007-8820-52cf76c3447c,Namespace:kube-system,Attempt:0,}" Sep 6 00:21:35.903468 kubelet[1927]: E0906 00:21:35.903410 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:36.905040 kubelet[1927]: E0906 00:21:36.905011 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:37.007369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:21:37.007478 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:21:37.005336 systemd-networkd[1025]: cilium_host: Link UP Sep 6 00:21:37.005573 systemd-networkd[1025]: cilium_net: Link UP Sep 6 00:21:37.008196 systemd-networkd[1025]: cilium_net: Gained carrier Sep 6 00:21:37.008362 systemd-networkd[1025]: cilium_host: Gained carrier Sep 6 00:21:37.086343 systemd-networkd[1025]: cilium_vxlan: Link UP Sep 6 00:21:37.086352 systemd-networkd[1025]: cilium_vxlan: Gained carrier Sep 6 00:21:37.309950 kernel: NET: Registered PF_ALG protocol family Sep 6 00:21:37.606075 systemd-networkd[1025]: cilium_net: Gained IPv6LL Sep 6 00:21:37.670102 systemd-networkd[1025]: cilium_host: Gained IPv6LL Sep 6 00:21:37.926281 systemd-networkd[1025]: lxc_health: Link UP Sep 6 00:21:37.935791 systemd-networkd[1025]: lxc_health: Gained carrier Sep 6 00:21:37.935991 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:21:38.350407 systemd-networkd[1025]: lxcb108bd1209ba: Link UP Sep 6 00:21:38.358202 systemd-networkd[1025]: lxc4a6fd97ce7fd: Link UP Sep 6 00:21:38.363954 kernel: eth0: renamed from tmp3d7c7 Sep 6 00:21:38.372718 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:21:38.372821 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb108bd1209ba: link becomes ready Sep 6 00:21:38.373715 kernel: eth0: renamed from tmpe5cbc Sep 6 00:21:38.384579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:21:38.384711 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4a6fd97ce7fd: link becomes ready Sep 6 00:21:38.381722 systemd-networkd[1025]: lxcb108bd1209ba: Gained carrier Sep 6 00:21:38.384669 systemd-networkd[1025]: lxc4a6fd97ce7fd: Gained carrier Sep 6 00:21:38.583281 kubelet[1927]: E0906 00:21:38.583244 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:38.907471 kubelet[1927]: E0906 00:21:38.907435 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:39.024284 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL Sep 6 00:21:39.176462 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:37486.service. 
Sep 6 00:21:39.237689 sshd[3153]: Accepted publickey for core from 10.0.0.1 port 37486 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:21:39.239337 sshd[3153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:39.244140 systemd-logind[1191]: New session 7 of user core. Sep 6 00:21:39.245247 systemd[1]: Started session-7.scope. Sep 6 00:21:39.552793 sshd[3153]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:39.555372 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:37486.service: Deactivated successfully. Sep 6 00:21:39.556329 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:21:39.557395 systemd-logind[1191]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:21:39.558203 systemd-logind[1191]: Removed session 7. Sep 6 00:21:39.654317 systemd-networkd[1025]: lxc4a6fd97ce7fd: Gained IPv6LL Sep 6 00:21:39.783060 systemd-networkd[1025]: lxc_health: Gained IPv6LL Sep 6 00:21:39.909016 kubelet[1927]: E0906 00:21:39.908983 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:40.048207 systemd-networkd[1025]: lxcb108bd1209ba: Gained IPv6LL Sep 6 00:21:41.818757 env[1203]: time="2025-09-06T00:21:41.818542375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:41.818757 env[1203]: time="2025-09-06T00:21:41.818604035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:41.818757 env[1203]: time="2025-09-06T00:21:41.818614567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:41.819173 env[1203]: time="2025-09-06T00:21:41.818818218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d7c7f6dc845a9992b4e9109465111d967102c5f8e3df3c58ec894a903b72826 pid=3188 runtime=io.containerd.runc.v2 Sep 6 00:21:41.823267 env[1203]: time="2025-09-06T00:21:41.822280685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:21:41.823267 env[1203]: time="2025-09-06T00:21:41.822328035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:21:41.823267 env[1203]: time="2025-09-06T00:21:41.822338187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:21:41.823267 env[1203]: time="2025-09-06T00:21:41.822477571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5cbce759f0e44e6c00fdbbbe27fdadbfd48dcb75ce33717515f7e76142e3b6b pid=3205 runtime=io.containerd.runc.v2 Sep 6 00:21:41.833618 systemd[1]: Started cri-containerd-3d7c7f6dc845a9992b4e9109465111d967102c5f8e3df3c58ec894a903b72826.scope. Sep 6 00:21:41.841062 systemd[1]: Started cri-containerd-e5cbce759f0e44e6c00fdbbbe27fdadbfd48dcb75ce33717515f7e76142e3b6b.scope. 
Sep 6 00:21:41.849024 systemd-resolved[1143]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:21:41.859254 systemd-resolved[1143]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:21:41.878288 env[1203]: time="2025-09-06T00:21:41.878244961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-slp2h,Uid:150c14a9-510c-4007-8820-52cf76c3447c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d7c7f6dc845a9992b4e9109465111d967102c5f8e3df3c58ec894a903b72826\"" Sep 6 00:21:41.878901 kubelet[1927]: E0906 00:21:41.878867 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:41.883843 env[1203]: time="2025-09-06T00:21:41.883797625Z" level=info msg="CreateContainer within sandbox \"3d7c7f6dc845a9992b4e9109465111d967102c5f8e3df3c58ec894a903b72826\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:21:41.887363 env[1203]: time="2025-09-06T00:21:41.887308515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mxhzd,Uid:8f3d2b01-fef5-4e80-9f00-212fe7ace8cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5cbce759f0e44e6c00fdbbbe27fdadbfd48dcb75ce33717515f7e76142e3b6b\"" Sep 6 00:21:41.888194 kubelet[1927]: E0906 00:21:41.888142 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:41.892694 env[1203]: time="2025-09-06T00:21:41.892658982Z" level=info msg="CreateContainer within sandbox \"e5cbce759f0e44e6c00fdbbbe27fdadbfd48dcb75ce33717515f7e76142e3b6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:21:41.899682 env[1203]: time="2025-09-06T00:21:41.899637729Z" level=info msg="CreateContainer within sandbox \"3d7c7f6dc845a9992b4e9109465111d967102c5f8e3df3c58ec894a903b72826\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48d7197647f3b749a6ab537deb4ae6c8ae9e6635a4564e94b5b8d14beb87ea5f\"" Sep 6 00:21:41.900090 env[1203]: time="2025-09-06T00:21:41.899977527Z" level=info msg="StartContainer for \"48d7197647f3b749a6ab537deb4ae6c8ae9e6635a4564e94b5b8d14beb87ea5f\"" Sep 6 00:21:41.907760 env[1203]: time="2025-09-06T00:21:41.907712424Z" level=info msg="CreateContainer within sandbox \"e5cbce759f0e44e6c00fdbbbe27fdadbfd48dcb75ce33717515f7e76142e3b6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da42b09c91e1a3b949c82f221fd49a5694343ff75b120e9f9a200bd3fcdc5e59\"" Sep 6 00:21:41.908320 env[1203]: time="2025-09-06T00:21:41.908222391Z" level=info msg="StartContainer for \"da42b09c91e1a3b949c82f221fd49a5694343ff75b120e9f9a200bd3fcdc5e59\"" Sep 6 00:21:41.918175 systemd[1]: Started cri-containerd-48d7197647f3b749a6ab537deb4ae6c8ae9e6635a4564e94b5b8d14beb87ea5f.scope. Sep 6 00:21:41.930212 systemd[1]: Started cri-containerd-da42b09c91e1a3b949c82f221fd49a5694343ff75b120e9f9a200bd3fcdc5e59.scope. 
Sep 6 00:21:41.951831 env[1203]: time="2025-09-06T00:21:41.951793858Z" level=info msg="StartContainer for \"48d7197647f3b749a6ab537deb4ae6c8ae9e6635a4564e94b5b8d14beb87ea5f\" returns successfully" Sep 6 00:21:41.956190 env[1203]: time="2025-09-06T00:21:41.956157299Z" level=info msg="StartContainer for \"da42b09c91e1a3b949c82f221fd49a5694343ff75b120e9f9a200bd3fcdc5e59\" returns successfully" Sep 6 00:21:42.920277 kubelet[1927]: E0906 00:21:42.920233 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:42.921862 kubelet[1927]: E0906 00:21:42.921828 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:43.118598 kubelet[1927]: I0906 00:21:43.116770 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mxhzd" podStartSLOduration=29.116751945 podStartE2EDuration="29.116751945s" podCreationTimestamp="2025-09-06 00:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:43.116356064 +0000 UTC m=+34.391334958" watchObservedRunningTime="2025-09-06 00:21:43.116751945 +0000 UTC m=+34.391730840" Sep 6 00:21:43.118598 kubelet[1927]: I0906 00:21:43.116862 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-slp2h" podStartSLOduration=29.116858269 podStartE2EDuration="29.116858269s" podCreationTimestamp="2025-09-06 00:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:21:43.07501021 +0000 UTC m=+34.349989124" watchObservedRunningTime="2025-09-06 00:21:43.116858269 +0000 UTC m=+34.391837183" Sep 6 00:21:43.923785 kubelet[1927]: E0906 00:21:43.923743 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:43.924215 kubelet[1927]: E0906 00:21:43.923859 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:44.558801 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:50284.service. Sep 6 00:21:44.594880 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:21:44.596343 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:44.599568 systemd-logind[1191]: New session 8 of user core. Sep 6 00:21:44.600608 systemd[1]: Started session-8.scope. Sep 6 00:21:44.727025 sshd[3348]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:44.729024 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:50284.service: Deactivated successfully. Sep 6 00:21:44.729808 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:21:44.730385 systemd-logind[1191]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:21:44.731056 systemd-logind[1191]: Removed session 8. 
Sep 6 00:21:44.924788 kubelet[1927]: E0906 00:21:44.924758 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:44.925241 kubelet[1927]: E0906 00:21:44.924884 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:21:49.732887 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:50296.service. Sep 6 00:21:49.766198 sshd[3364]: Accepted publickey for core from 10.0.0.1 port 50296 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:21:49.767699 sshd[3364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:49.771060 systemd-logind[1191]: New session 9 of user core. Sep 6 00:21:49.771867 systemd[1]: Started session-9.scope. Sep 6 00:21:50.015246 sshd[3364]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:50.017763 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:50296.service: Deactivated successfully. Sep 6 00:21:50.018452 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:21:50.019147 systemd-logind[1191]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:21:50.019971 systemd-logind[1191]: Removed session 9. Sep 6 00:21:55.022959 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:34618.service. Sep 6 00:21:55.199730 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 34618 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:21:55.201190 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:55.205507 systemd-logind[1191]: New session 10 of user core. Sep 6 00:21:55.206725 systemd[1]: Started session-10.scope. Sep 6 00:21:55.360812 sshd[3378]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:55.363688 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:34618.service: Deactivated successfully. Sep 6 00:21:55.364368 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:21:55.365031 systemd-logind[1191]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:21:55.365945 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:34626.service. Sep 6 00:21:55.368210 systemd-logind[1191]: Removed session 10. Sep 6 00:21:55.400416 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 34626 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:21:55.401461 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:55.404789 systemd-logind[1191]: New session 11 of user core. Sep 6 00:21:55.405698 systemd[1]: Started session-11.scope. Sep 6 00:21:55.789332 sshd[3393]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:55.793019 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:34630.service. Sep 6 00:21:55.795784 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:34626.service: Deactivated successfully. Sep 6 00:21:55.796486 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:21:55.797497 systemd-logind[1191]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:21:55.798378 systemd-logind[1191]: Removed session 11. 
Sep 6 00:21:55.827226 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 34630 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:21:55.828581 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:21:55.832090 systemd-logind[1191]: New session 12 of user core. Sep 6 00:21:55.832971 systemd[1]: Started session-12.scope. Sep 6 00:21:55.932518 sshd[3404]: pam_unix(sshd:session): session closed for user core Sep 6 00:21:55.935096 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:34630.service: Deactivated successfully. Sep 6 00:21:55.935737 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:21:55.936335 systemd-logind[1191]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:21:55.936928 systemd-logind[1191]: Removed session 12. Sep 6 00:22:00.937949 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:36824.service. Sep 6 00:22:00.972017 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 36824 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:00.973253 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:00.976831 systemd-logind[1191]: New session 13 of user core. Sep 6 00:22:00.977889 systemd[1]: Started session-13.scope. Sep 6 00:22:01.084295 sshd[3418]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:01.087201 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:36824.service: Deactivated successfully. Sep 6 00:22:01.088015 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:22:01.088667 systemd-logind[1191]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:22:01.089460 systemd-logind[1191]: Removed session 13. Sep 6 00:22:06.089138 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:36836.service. Sep 6 00:22:06.128412 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 36836 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:06.129580 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:06.133428 systemd-logind[1191]: New session 14 of user core. Sep 6 00:22:06.134561 systemd[1]: Started session-14.scope. Sep 6 00:22:06.241141 sshd[3431]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:06.244435 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:36836.service: Deactivated successfully. Sep 6 00:22:06.245034 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:22:06.245568 systemd-logind[1191]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:22:06.246742 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:36850.service. Sep 6 00:22:06.247559 systemd-logind[1191]: Removed session 14. Sep 6 00:22:06.281400 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 36850 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:06.282833 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:06.286708 systemd-logind[1191]: New session 15 of user core. Sep 6 00:22:06.287556 systemd[1]: Started session-15.scope. Sep 6 00:22:06.588477 sshd[3444]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:06.591648 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:36850.service: Deactivated successfully. Sep 6 00:22:06.592372 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:22:06.593101 systemd-logind[1191]: Session 15 logged out. Waiting for processes to exit. 
Sep 6 00:22:06.594504 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:36858.service. Sep 6 00:22:06.595477 systemd-logind[1191]: Removed session 15. Sep 6 00:22:06.635387 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 36858 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:06.637083 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:06.641293 systemd-logind[1191]: New session 16 of user core. Sep 6 00:22:06.642181 systemd[1]: Started session-16.scope. Sep 6 00:22:07.162117 sshd[3456]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:07.164711 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:36858.service: Deactivated successfully. Sep 6 00:22:07.165206 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:22:07.166946 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:36870.service. Sep 6 00:22:07.168204 systemd-logind[1191]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:22:07.169606 systemd-logind[1191]: Removed session 16. Sep 6 00:22:07.204622 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 36870 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:07.205789 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:07.209436 systemd-logind[1191]: New session 17 of user core. Sep 6 00:22:07.210481 systemd[1]: Started session-17.scope. Sep 6 00:22:07.437026 sshd[3475]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:07.442414 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:36874.service. Sep 6 00:22:07.443029 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:36870.service: Deactivated successfully. Sep 6 00:22:07.445587 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:22:07.446990 systemd-logind[1191]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:22:07.447878 systemd-logind[1191]: Removed session 17. Sep 6 00:22:07.478586 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 36874 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:07.479612 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:07.483118 systemd-logind[1191]: New session 18 of user core. Sep 6 00:22:07.483999 systemd[1]: Started session-18.scope. Sep 6 00:22:07.591520 sshd[3487]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:07.593618 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:36874.service: Deactivated successfully. Sep 6 00:22:07.594556 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:22:07.595128 systemd-logind[1191]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:22:07.595772 systemd-logind[1191]: Removed session 18. Sep 6 00:22:12.596840 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:37184.service. Sep 6 00:22:12.631770 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 37184 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:12.633155 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:12.636898 systemd-logind[1191]: New session 19 of user core. Sep 6 00:22:12.637961 systemd[1]: Started session-19.scope. Sep 6 00:22:12.743137 sshd[3504]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:12.745507 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:37184.service: Deactivated successfully. Sep 6 00:22:12.746345 systemd[1]: session-19.scope: Deactivated successfully. 
Sep 6 00:22:12.746857 systemd-logind[1191]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:22:12.747571 systemd-logind[1191]: Removed session 19. Sep 6 00:22:17.746771 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:37194.service. Sep 6 00:22:17.780145 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 37194 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:17.781341 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:17.784647 systemd-logind[1191]: New session 20 of user core. Sep 6 00:22:17.785491 systemd[1]: Started session-20.scope. Sep 6 00:22:17.934769 sshd[3522]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:17.936774 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:37194.service: Deactivated successfully. Sep 6 00:22:17.937601 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:22:17.938078 systemd-logind[1191]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:22:17.938695 systemd-logind[1191]: Removed session 20. Sep 6 00:22:22.938691 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:46128.service. Sep 6 00:22:22.973951 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 46128 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:22.975299 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:22.979026 systemd-logind[1191]: New session 21 of user core. Sep 6 00:22:22.980065 systemd[1]: Started session-21.scope. Sep 6 00:22:23.082666 sshd[3535]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:23.086167 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:46128.service: Deactivated successfully. Sep 6 00:22:23.086857 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:22:23.087491 systemd-logind[1191]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:22:23.088848 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:46144.service. Sep 6 00:22:23.090264 systemd-logind[1191]: Removed session 21. Sep 6 00:22:23.123146 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 46144 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:23.124543 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:23.128283 systemd-logind[1191]: New session 22 of user core. Sep 6 00:22:23.129199 systemd[1]: Started session-22.scope. Sep 6 00:22:25.246019 env[1203]: time="2025-09-06T00:22:25.245960371Z" level=info msg="StopContainer for \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\" with timeout 30 (s)" Sep 6 00:22:25.247754 env[1203]: time="2025-09-06T00:22:25.247724459Z" level=info msg="Stop container \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\" with signal terminated" Sep 6 00:22:25.257857 systemd[1]: cri-containerd-bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f.scope: Deactivated successfully. 
Sep 6 00:22:25.271584 env[1203]: time="2025-09-06T00:22:25.271516386Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:22:25.277162 env[1203]: time="2025-09-06T00:22:25.277106627Z" level=info msg="StopContainer for \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\" with timeout 2 (s)" Sep 6 00:22:25.277323 env[1203]: time="2025-09-06T00:22:25.277299023Z" level=info msg="Stop container \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\" with signal terminated" Sep 6 00:22:25.279032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f-rootfs.mount: Deactivated successfully. Sep 6 00:22:25.284338 systemd-networkd[1025]: lxc_health: Link DOWN Sep 6 00:22:25.284347 systemd-networkd[1025]: lxc_health: Lost carrier Sep 6 00:22:25.286192 env[1203]: time="2025-09-06T00:22:25.286142411Z" level=info msg="shim disconnected" id=bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f Sep 6 00:22:25.286192 env[1203]: time="2025-09-06T00:22:25.286187514Z" level=warning msg="cleaning up after shim disconnected" id=bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f namespace=k8s.io Sep 6 00:22:25.286192 env[1203]: time="2025-09-06T00:22:25.286197392Z" level=info msg="cleaning up dead shim" Sep 6 00:22:25.293129 env[1203]: time="2025-09-06T00:22:25.293077806Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3600 runtime=io.containerd.runc.v2\n" Sep 6 00:22:25.296482 env[1203]: time="2025-09-06T00:22:25.296440304Z" level=info msg="StopContainer for \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\" returns successfully" Sep 6 00:22:25.297282 env[1203]: time="2025-09-06T00:22:25.297202511Z" level=info msg="StopPodSandbox for \"ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d\"" Sep 6 00:22:25.299377 env[1203]: time="2025-09-06T00:22:25.297396469Z" level=info msg="Container to stop \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:25.299271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d-shm.mount: Deactivated successfully. Sep 6 00:22:25.308275 systemd[1]: cri-containerd-ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d.scope: Deactivated successfully. Sep 6 00:22:25.330595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d-rootfs.mount: Deactivated successfully. Sep 6 00:22:25.338206 systemd[1]: cri-containerd-fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050.scope: Deactivated successfully. Sep 6 00:22:25.338459 systemd[1]: cri-containerd-fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050.scope: Consumed 6.075s CPU time. 
Sep 6 00:22:25.340717 env[1203]: time="2025-09-06T00:22:25.340656723Z" level=info msg="shim disconnected" id=ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d Sep 6 00:22:25.340717 env[1203]: time="2025-09-06T00:22:25.340715301Z" level=warning msg="cleaning up after shim disconnected" id=ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d namespace=k8s.io Sep 6 00:22:25.341028 env[1203]: time="2025-09-06T00:22:25.340724929Z" level=info msg="cleaning up dead shim" Sep 6 00:22:25.347251 env[1203]: time="2025-09-06T00:22:25.347210573Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n" Sep 6 00:22:25.347970 env[1203]: time="2025-09-06T00:22:25.347876222Z" level=info msg="TearDown network for sandbox \"ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d\" successfully" Sep 6 00:22:25.347970 env[1203]: time="2025-09-06T00:22:25.347902320Z" level=info msg="StopPodSandbox for \"ae167adafd997723f1f26573580af6e54353262e1c932c784d57cbfab33b623d\" returns successfully" Sep 6 00:22:25.356587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050-rootfs.mount: Deactivated successfully. Sep 6 00:22:25.364298 env[1203]: time="2025-09-06T00:22:25.364234826Z" level=info msg="shim disconnected" id=fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050 Sep 6 00:22:25.364298 env[1203]: time="2025-09-06T00:22:25.364288454Z" level=warning msg="cleaning up after shim disconnected" id=fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050 namespace=k8s.io Sep 6 00:22:25.364298 env[1203]: time="2025-09-06T00:22:25.364297962Z" level=info msg="cleaning up dead shim" Sep 6 00:22:25.370925 env[1203]: time="2025-09-06T00:22:25.370861230Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3658 runtime=io.containerd.runc.v2\n" Sep 6 00:22:25.374059 env[1203]: time="2025-09-06T00:22:25.374012128Z" level=info msg="StopContainer for \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\" returns successfully" Sep 6 00:22:25.374517 env[1203]: time="2025-09-06T00:22:25.374487355Z" level=info msg="StopPodSandbox for \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\"" Sep 6 00:22:25.374572 env[1203]: time="2025-09-06T00:22:25.374553087Z" level=info msg="Container to stop \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:25.374607 env[1203]: time="2025-09-06T00:22:25.374573844Z" level=info msg="Container to stop \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:25.374607 env[1203]: time="2025-09-06T00:22:25.374587670Z" level=info msg="Container to stop \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:25.374607 env[1203]: time="2025-09-06T00:22:25.374600635Z" level=info msg="Container to stop \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:25.374696 env[1203]: time="2025-09-06T00:22:25.374612606Z" level=info msg="Container to stop 
\"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:22:25.380085 systemd[1]: cri-containerd-e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44.scope: Deactivated successfully. Sep 6 00:22:25.385765 kubelet[1927]: I0906 00:22:25.385724 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jpxj\" (UniqueName: \"kubernetes.io/projected/53d8eded-292b-461e-8643-d515bfbc050f-kube-api-access-7jpxj\") pod \"53d8eded-292b-461e-8643-d515bfbc050f\" (UID: \"53d8eded-292b-461e-8643-d515bfbc050f\") " Sep 6 00:22:25.385765 kubelet[1927]: I0906 00:22:25.385764 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53d8eded-292b-461e-8643-d515bfbc050f-cilium-config-path\") pod \"53d8eded-292b-461e-8643-d515bfbc050f\" (UID: \"53d8eded-292b-461e-8643-d515bfbc050f\") " Sep 6 00:22:25.388364 kubelet[1927]: I0906 00:22:25.388309 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53d8eded-292b-461e-8643-d515bfbc050f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53d8eded-292b-461e-8643-d515bfbc050f" (UID: "53d8eded-292b-461e-8643-d515bfbc050f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:22:25.395773 kubelet[1927]: I0906 00:22:25.395696 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d8eded-292b-461e-8643-d515bfbc050f-kube-api-access-7jpxj" (OuterVolumeSpecName: "kube-api-access-7jpxj") pod "53d8eded-292b-461e-8643-d515bfbc050f" (UID: "53d8eded-292b-461e-8643-d515bfbc050f"). InnerVolumeSpecName "kube-api-access-7jpxj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:22:25.408892 env[1203]: time="2025-09-06T00:22:25.408844660Z" level=info msg="shim disconnected" id=e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44 Sep 6 00:22:25.409196 env[1203]: time="2025-09-06T00:22:25.409163610Z" level=warning msg="cleaning up after shim disconnected" id=e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44 namespace=k8s.io Sep 6 00:22:25.409196 env[1203]: time="2025-09-06T00:22:25.409181031Z" level=info msg="cleaning up dead shim" Sep 6 00:22:25.416326 env[1203]: time="2025-09-06T00:22:25.416266133Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3690 runtime=io.containerd.runc.v2\n" Sep 6 00:22:25.417009 env[1203]: time="2025-09-06T00:22:25.416979890Z" level=info msg="TearDown network for sandbox \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" successfully" Sep 6 00:22:25.417009 env[1203]: time="2025-09-06T00:22:25.417005247Z" level=info msg="StopPodSandbox for \"e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44\" returns successfully" Sep 6 00:22:25.486460 kubelet[1927]: I0906 00:22:25.486415 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-config-path\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486691 kubelet[1927]: I0906 00:22:25.486477 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-bpf-maps\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486691 kubelet[1927]: I0906 00:22:25.486494 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-etc-cni-netd\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486691 kubelet[1927]: I0906 00:22:25.486507 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-lib-modules\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486691 kubelet[1927]: I0906 00:22:25.486521 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-xtables-lock\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486691 kubelet[1927]: I0906 00:22:25.486544 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqw4j\" (UniqueName: \"kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-kube-api-access-gqw4j\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486691 kubelet[1927]: I0906 00:22:25.486557 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-cgroup\") pod 
\"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486837 kubelet[1927]: I0906 00:22:25.486573 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a84b39d-750f-4000-bfd2-cce783e628fe-clustermesh-secrets\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486837 kubelet[1927]: I0906 00:22:25.486588 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cni-path\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486837 kubelet[1927]: I0906 00:22:25.486602 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-hubble-tls\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486837 kubelet[1927]: I0906 00:22:25.486579 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.486837 kubelet[1927]: I0906 00:22:25.486614 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-run\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.486837 kubelet[1927]: I0906 00:22:25.486644 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.487002 kubelet[1927]: I0906 00:22:25.486670 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.487002 kubelet[1927]: I0906 00:22:25.486691 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.487002 kubelet[1927]: I0906 00:22:25.486695 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-hostproc\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.487002 kubelet[1927]: I0906 00:22:25.486716 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-hostproc" (OuterVolumeSpecName: "hostproc") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.487102 kubelet[1927]: I0906 00:22:25.487087 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.487129 kubelet[1927]: I0906 00:22:25.487110 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.487129 kubelet[1927]: I0906 00:22:25.487124 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cni-path" (OuterVolumeSpecName: "cni-path") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487320 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-net\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487367 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-kernel\") pod \"4a84b39d-750f-4000-bfd2-cce783e628fe\" (UID: \"4a84b39d-750f-4000-bfd2-cce783e628fe\") " Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487400 1927 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487411 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487419 1927 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487449 1927 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jpxj\" (UniqueName: \"kubernetes.io/projected/53d8eded-292b-461e-8643-d515bfbc050f-kube-api-access-7jpxj\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487456 1927 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489374 kubelet[1927]: I0906 00:22:25.487465 1927 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489620 kubelet[1927]: I0906 00:22:25.487471 1927 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489620 kubelet[1927]: I0906 00:22:25.487480 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53d8eded-292b-461e-8643-d515bfbc050f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489620 kubelet[1927]: I0906 00:22:25.487487 1927 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489620 kubelet[1927]: I0906 00:22:25.487494 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.489620 kubelet[1927]: I0906 00:22:25.487515 1927 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.489620 kubelet[1927]: I0906 00:22:25.487536 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:25.489771 kubelet[1927]: I0906 00:22:25.488506 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:22:25.490242 kubelet[1927]: I0906 00:22:25.490211 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:22:25.490418 kubelet[1927]: I0906 00:22:25.490288 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-kube-api-access-gqw4j" (OuterVolumeSpecName: "kube-api-access-gqw4j") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "kube-api-access-gqw4j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:22:25.490418 kubelet[1927]: I0906 00:22:25.490354 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a84b39d-750f-4000-bfd2-cce783e628fe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4a84b39d-750f-4000-bfd2-cce783e628fe" (UID: "4a84b39d-750f-4000-bfd2-cce783e628fe"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:22:25.588254 kubelet[1927]: I0906 00:22:25.588118 1927 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.588254 kubelet[1927]: I0906 00:22:25.588146 1927 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a84b39d-750f-4000-bfd2-cce783e628fe-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.588254 kubelet[1927]: I0906 00:22:25.588155 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a84b39d-750f-4000-bfd2-cce783e628fe-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.588254 kubelet[1927]: I0906 00:22:25.588163 1927 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gqw4j\" (UniqueName: \"kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-kube-api-access-gqw4j\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.588254 kubelet[1927]: I0906 00:22:25.588171 1927 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a84b39d-750f-4000-bfd2-cce783e628fe-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.588254 kubelet[1927]: I0906 00:22:25.588178 1927 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a84b39d-750f-4000-bfd2-cce783e628fe-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:25.993876 kubelet[1927]: I0906 00:22:25.993827 1927 scope.go:117] "RemoveContainer" containerID="fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050" Sep 6 00:22:25.995247 env[1203]: time="2025-09-06T00:22:25.995194569Z" level=info msg="RemoveContainer for \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\"" Sep 6 00:22:25.997089 systemd[1]: Removed slice kubepods-burstable-pod4a84b39d_750f_4000_bfd2_cce783e628fe.slice. Sep 6 00:22:25.997164 systemd[1]: kubepods-burstable-pod4a84b39d_750f_4000_bfd2_cce783e628fe.slice: Consumed 6.172s CPU time. Sep 6 00:22:25.999081 env[1203]: time="2025-09-06T00:22:25.999035921Z" level=info msg="RemoveContainer for \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\" returns successfully" Sep 6 00:22:25.999377 kubelet[1927]: I0906 00:22:25.999305 1927 scope.go:117] "RemoveContainer" containerID="345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7" Sep 6 00:22:26.001217 env[1203]: time="2025-09-06T00:22:26.001179470Z" level=info msg="RemoveContainer for \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\"" Sep 6 00:22:26.001272 systemd[1]: Removed slice kubepods-besteffort-pod53d8eded_292b_461e_8643_d515bfbc050f.slice. 
Sep 6 00:22:26.005299 env[1203]: time="2025-09-06T00:22:26.005261592Z" level=info msg="RemoveContainer for \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\" returns successfully" Sep 6 00:22:26.005466 kubelet[1927]: I0906 00:22:26.005432 1927 scope.go:117] "RemoveContainer" containerID="0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec" Sep 6 00:22:26.006532 env[1203]: time="2025-09-06T00:22:26.006505745Z" level=info msg="RemoveContainer for \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\"" Sep 6 00:22:26.010128 env[1203]: time="2025-09-06T00:22:26.009869839Z" level=info msg="RemoveContainer for \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\" returns successfully" Sep 6 00:22:26.010221 kubelet[1927]: I0906 00:22:26.010156 1927 scope.go:117] "RemoveContainer" containerID="d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0" Sep 6 00:22:26.011629 env[1203]: time="2025-09-06T00:22:26.011574733Z" level=info msg="RemoveContainer for \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\"" Sep 6 00:22:26.014560 env[1203]: time="2025-09-06T00:22:26.014528549Z" level=info msg="RemoveContainer for \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\" returns successfully" Sep 6 00:22:26.014875 kubelet[1927]: I0906 00:22:26.014850 1927 scope.go:117] "RemoveContainer" containerID="7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96" Sep 6 00:22:26.015840 env[1203]: time="2025-09-06T00:22:26.015789182Z" level=info msg="RemoveContainer for \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\"" Sep 6 00:22:26.019565 env[1203]: time="2025-09-06T00:22:26.019507341Z" level=info msg="RemoveContainer for \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\" returns successfully" Sep 6 00:22:26.019707 kubelet[1927]: I0906 00:22:26.019665 1927 scope.go:117] "RemoveContainer" containerID="fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050" Sep 6 00:22:26.019975 env[1203]: time="2025-09-06T00:22:26.019888707Z" level=error msg="ContainerStatus for \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\": not found" Sep 6 00:22:26.020114 kubelet[1927]: E0906 00:22:26.020076 1927 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\": not found" containerID="fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050" Sep 6 00:22:26.020168 kubelet[1927]: I0906 00:22:26.020111 1927 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050"} err="failed to get container status \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc2b261b3dba297ab2f150c7c177aabbc6c726e4eaa21abdc2475b6872266050\": not found" Sep 6 00:22:26.020168 kubelet[1927]: I0906 00:22:26.020148 1927 scope.go:117] "RemoveContainer" containerID="345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7" Sep 6 00:22:26.020377 env[1203]: time="2025-09-06T00:22:26.020309365Z" level=error msg="ContainerStatus for 
\"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\": not found" Sep 6 00:22:26.020507 kubelet[1927]: E0906 00:22:26.020483 1927 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\": not found" containerID="345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7" Sep 6 00:22:26.020561 kubelet[1927]: I0906 00:22:26.020514 1927 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7"} err="failed to get container status \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"345001833a3db4831c401af40efe590eac56dda5e0824c97009517e85aa9f0b7\": not found" Sep 6 00:22:26.020561 kubelet[1927]: I0906 00:22:26.020535 1927 scope.go:117] "RemoveContainer" containerID="0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec" Sep 6 00:22:26.020759 env[1203]: time="2025-09-06T00:22:26.020703795Z" level=error msg="ContainerStatus for \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\": not found" Sep 6 00:22:26.020886 kubelet[1927]: E0906 00:22:26.020862 1927 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\": not found" containerID="0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec" Sep 6 00:22:26.020969 kubelet[1927]: I0906 00:22:26.020887 1927 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec"} err="failed to get container status \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cce1160cf2cefd8a10b2bfb959702120d6a41c6cf23e04c6bae1cbd797b49ec\": not found" Sep 6 00:22:26.020969 kubelet[1927]: I0906 00:22:26.020902 1927 scope.go:117] "RemoveContainer" containerID="d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0" Sep 6 00:22:26.021117 env[1203]: time="2025-09-06T00:22:26.021082797Z" level=error msg="ContainerStatus for \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\": not found" Sep 6 00:22:26.021485 kubelet[1927]: E0906 00:22:26.021461 1927 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\": not found" containerID="d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0" Sep 6 00:22:26.021563 kubelet[1927]: I0906 00:22:26.021484 1927 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0"} err="failed to get container status \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7b786fdccd3a66d1b2267526ab06813368c9192da37683fe3e57b31ce60efa0\": not found" Sep 6 00:22:26.021563 kubelet[1927]: I0906 00:22:26.021497 1927 scope.go:117] "RemoveContainer" containerID="7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96" Sep 6 00:22:26.021701 env[1203]: time="2025-09-06T00:22:26.021662028Z" level=error msg="ContainerStatus for \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\": not found" Sep 6 00:22:26.021842 kubelet[1927]: E0906 00:22:26.021789 1927 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\": not found" containerID="7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96" Sep 6 00:22:26.021952 kubelet[1927]: I0906 00:22:26.021842 1927 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96"} err="failed to get container status \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\": rpc error: code = NotFound desc = an error occurred when try to find container \"7bab1729fac01f63cb5fbc3e8ec3ed3869de44fe0d0fe35122a884e658219a96\": not found" Sep 6 00:22:26.021952 kubelet[1927]: I0906 00:22:26.021855 1927 scope.go:117] "RemoveContainer" containerID="bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f" Sep 6 00:22:26.022745 env[1203]: time="2025-09-06T00:22:26.022717962Z" level=info msg="RemoveContainer for \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\"" Sep 6 00:22:26.025682 env[1203]: time="2025-09-06T00:22:26.025643996Z" level=info msg="RemoveContainer for \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\" returns successfully" Sep 6 00:22:26.025795 kubelet[1927]: I0906 00:22:26.025769 1927 scope.go:117] "RemoveContainer" containerID="bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f" Sep 6 00:22:26.026005 env[1203]: time="2025-09-06T00:22:26.025962856Z" level=error msg="ContainerStatus for \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\": not found" Sep 6 00:22:26.026119 kubelet[1927]: E0906 00:22:26.026085 1927 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\": not found" containerID="bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f" Sep 6 00:22:26.026165 kubelet[1927]: I0906 00:22:26.026122 1927 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f"} err="failed to get container status \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"bb7352bce34175952f9d6937923a274ac9ada44d41fdab286017a057d91e247f\": not found" Sep 6 00:22:26.251292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44-rootfs.mount: Deactivated successfully. Sep 6 00:22:26.251392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e98242aaf98d7db9aae9bf3ca93f180d51631405661f678c2aa07ebe00938a44-shm.mount: Deactivated successfully. Sep 6 00:22:26.251444 systemd[1]: var-lib-kubelet-pods-4a84b39d\x2d750f\x2d4000\x2dbfd2\x2dcce783e628fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgqw4j.mount: Deactivated successfully. Sep 6 00:22:26.251501 systemd[1]: var-lib-kubelet-pods-53d8eded\x2d292b\x2d461e\x2d8643\x2dd515bfbc050f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7jpxj.mount: Deactivated successfully. Sep 6 00:22:26.251552 systemd[1]: var-lib-kubelet-pods-4a84b39d\x2d750f\x2d4000\x2dbfd2\x2dcce783e628fe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:22:26.251602 systemd[1]: var-lib-kubelet-pods-4a84b39d\x2d750f\x2d4000\x2dbfd2\x2dcce783e628fe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:22:26.686465 sshd[3548]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:26.689816 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:46146.service. Sep 6 00:22:26.690311 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:46144.service: Deactivated successfully. Sep 6 00:22:26.690880 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:22:26.691433 systemd-logind[1191]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:22:26.692332 systemd-logind[1191]: Removed session 22. Sep 6 00:22:26.726056 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 46146 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:26.727155 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:26.730375 systemd-logind[1191]: New session 23 of user core. Sep 6 00:22:26.731134 systemd[1]: Started session-23.scope. Sep 6 00:22:26.829305 kubelet[1927]: E0906 00:22:26.829245 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:22:26.831075 kubelet[1927]: I0906 00:22:26.831032 1927 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a84b39d-750f-4000-bfd2-cce783e628fe" path="/var/lib/kubelet/pods/4a84b39d-750f-4000-bfd2-cce783e628fe/volumes" Sep 6 00:22:26.831573 kubelet[1927]: I0906 00:22:26.831542 1927 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d8eded-292b-461e-8643-d515bfbc050f" path="/var/lib/kubelet/pods/53d8eded-292b-461e-8643-d515bfbc050f/volumes" Sep 6 00:22:27.119478 sshd[3706]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:27.122552 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:46146.service: Deactivated successfully. Sep 6 00:22:27.123150 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:22:27.124999 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:46162.service. Sep 6 00:22:27.125742 systemd-logind[1191]: Session 23 logged out. Waiting for processes to exit. Sep 6 00:22:27.126684 systemd-logind[1191]: Removed session 23. 
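In the container-removal pass above, each RemoveContainer call succeeds, but the follow-up ContainerStatus calls for the same IDs come back NotFound, and the kubelet logs "DeleteContainer returned error ... not found" at info level and moves on: the containers are already gone, so the delete is treated as idempotent. A generic sketch of that pattern, assuming the runtime surfaces gRPC status codes (the removeFn type and helper are hypothetical, not kubelet's actual code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeFn stands in for a CRI RemoveContainer-style call.
    type removeFn func(containerID string) error

    // removeIfPresent treats NotFound from the runtime as success: the container is
    // already gone, so repeating the delete changes nothing. Any other error is
    // surfaced to the caller.
    func removeIfPresent(remove removeFn, id string) error {
    	err := remove(id)
    	if err == nil || status.Code(err) == codes.NotFound {
    		return nil
    	}
    	return fmt.Errorf("remove container %s: %w", id, err)
    }

    func main() {
    	gone := func(string) error {
    		return status.Error(codes.NotFound, "an error occurred when try to find container")
    	}
    	if err := removeIfPresent(gone, "example-container-id"); err != nil {
    		fmt.Println("unexpected:", err)
    		return
    	}
    	fmt.Println("container already removed; nothing to do")
    }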
Sep 6 00:22:27.149356 systemd[1]: Created slice kubepods-burstable-pode8d59c1f_b47e_45ea_b9c7_ac3495d8af8e.slice. Sep 6 00:22:27.161907 sshd[3719]: Accepted publickey for core from 10.0.0.1 port 46162 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:27.163184 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:27.168012 systemd[1]: Started session-24.scope. Sep 6 00:22:27.168168 systemd-logind[1191]: New session 24 of user core. Sep 6 00:22:27.196760 kubelet[1927]: I0906 00:22:27.196709 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-bpf-maps\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.196760 kubelet[1927]: I0906 00:22:27.196742 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hostproc\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.196760 kubelet[1927]: I0906 00:22:27.196759 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-xtables-lock\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.196900 kubelet[1927]: I0906 00:22:27.196777 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-kube-api-access-4brh4\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.196900 kubelet[1927]: I0906 00:22:27.196792 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-run\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.196900 kubelet[1927]: I0906 00:22:27.196807 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hubble-tls\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197006 kubelet[1927]: I0906 00:22:27.196925 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-net\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197046 kubelet[1927]: I0906 00:22:27.197017 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-etc-cni-netd\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197074 kubelet[1927]: I0906 00:22:27.197060 1927 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-kernel\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197104 kubelet[1927]: I0906 00:22:27.197085 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cni-path\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197104 kubelet[1927]: I0906 00:22:27.197098 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-lib-modules\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197158 kubelet[1927]: I0906 00:22:27.197110 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-ipsec-secrets\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197158 kubelet[1927]: I0906 00:22:27.197127 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-cgroup\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197158 kubelet[1927]: I0906 00:22:27.197143 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-clustermesh-secrets\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.197227 kubelet[1927]: I0906 00:22:27.197188 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-config-path\") pod \"cilium-srt4w\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " pod="kube-system/cilium-srt4w" Sep 6 00:22:27.295998 sshd[3719]: pam_unix(sshd:session): session closed for user core Sep 6 00:22:27.301358 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:46176.service. Sep 6 00:22:27.305061 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:46162.service: Deactivated successfully. Sep 6 00:22:27.305652 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 00:22:27.311208 systemd-logind[1191]: Session 24 logged out. Waiting for processes to exit. Sep 6 00:22:27.313210 systemd-logind[1191]: Removed session 24. 
Sep 6 00:22:27.314426 kubelet[1927]: E0906 00:22:27.314396 1927 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls kube-api-access-4brh4], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-srt4w" podUID="e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" Sep 6 00:22:27.350267 sshd[3734]: Accepted publickey for core from 10.0.0.1 port 46176 ssh2: RSA SHA256:NDoKIkufV/B1Zx+wYsdCOWsyg9FfoMI5xabqeZGBXwg Sep 6 00:22:27.351529 sshd[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:22:27.354865 systemd-logind[1191]: New session 25 of user core. Sep 6 00:22:27.355771 systemd[1]: Started session-25.scope. Sep 6 00:22:28.105424 kubelet[1927]: I0906 00:22:28.105370 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-bpf-maps\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.105424 kubelet[1927]: I0906 00:22:28.105407 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-run\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.105424 kubelet[1927]: I0906 00:22:28.105432 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-kube-api-access-4brh4\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.105874 kubelet[1927]: I0906 00:22:28.105447 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-kernel\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.105874 kubelet[1927]: I0906 00:22:28.105464 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-etc-cni-netd\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.105874 kubelet[1927]: I0906 00:22:28.105507 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.105874 kubelet[1927]: I0906 00:22:28.105510 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.105874 kubelet[1927]: I0906 00:22:28.105533 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.106017 kubelet[1927]: I0906 00:22:28.105573 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hostproc\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106017 kubelet[1927]: I0906 00:22:28.105590 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-ipsec-secrets\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106017 kubelet[1927]: I0906 00:22:28.105607 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-config-path\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106017 kubelet[1927]: I0906 00:22:28.105635 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-xtables-lock\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106017 kubelet[1927]: I0906 00:22:28.105649 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-lib-modules\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106017 kubelet[1927]: I0906 00:22:28.105654 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hostproc" (OuterVolumeSpecName: "hostproc") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.106157 kubelet[1927]: I0906 00:22:28.105663 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hubble-tls\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106157 kubelet[1927]: I0906 00:22:28.105673 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.106157 kubelet[1927]: I0906 00:22:28.105680 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-net\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106157 kubelet[1927]: I0906 00:22:28.105688 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.106157 kubelet[1927]: I0906 00:22:28.105696 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cni-path\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106157 kubelet[1927]: I0906 00:22:28.105711 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-clustermesh-secrets\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106292 kubelet[1927]: I0906 00:22:28.105725 1927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-cgroup\") pod \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\" (UID: \"e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e\") " Sep 6 00:22:28.106292 kubelet[1927]: I0906 00:22:28.105767 1927 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.106292 kubelet[1927]: I0906 00:22:28.105775 1927 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.106292 kubelet[1927]: I0906 00:22:28.105783 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.106292 kubelet[1927]: I0906 00:22:28.105791 1927 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.106292 kubelet[1927]: I0906 00:22:28.105799 1927 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.106292 kubelet[1927]: I0906 00:22:28.105806 1927 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.106445 kubelet[1927]: I0906 00:22:28.105822 1927 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.106445 kubelet[1927]: I0906 00:22:28.105836 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.106445 kubelet[1927]: I0906 00:22:28.106009 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cni-path" (OuterVolumeSpecName: "cni-path") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.106445 kubelet[1927]: I0906 00:22:28.106030 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:22:28.107801 kubelet[1927]: I0906 00:22:28.107778 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:22:28.107801 kubelet[1927]: I0906 00:22:28.108191 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:22:28.107801 kubelet[1927]: I0906 00:22:28.108210 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-kube-api-access-4brh4" (OuterVolumeSpecName: "kube-api-access-4brh4") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "kube-api-access-4brh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:22:28.109471 systemd[1]: var-lib-kubelet-pods-e8d59c1f\x2db47e\x2d45ea\x2db9c7\x2dac3495d8af8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4brh4.mount: Deactivated successfully. Sep 6 00:22:28.109570 systemd[1]: var-lib-kubelet-pods-e8d59c1f\x2db47e\x2d45ea\x2db9c7\x2dac3495d8af8e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:22:28.110075 kubelet[1927]: I0906 00:22:28.110055 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:22:28.110217 kubelet[1927]: I0906 00:22:28.110199 1927 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" (UID: "e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:22:28.206461 kubelet[1927]: I0906 00:22:28.206411 1927 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206461 kubelet[1927]: I0906 00:22:28.206442 1927 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206461 kubelet[1927]: I0906 00:22:28.206450 1927 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206461 kubelet[1927]: I0906 00:22:28.206460 1927 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206660 kubelet[1927]: I0906 00:22:28.206493 1927 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206660 kubelet[1927]: I0906 00:22:28.206501 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206660 kubelet[1927]: I0906 00:22:28.206508 1927 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-kube-api-access-4brh4\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206660 kubelet[1927]: I0906 00:22:28.206516 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.206660 kubelet[1927]: I0906 00:22:28.206523 1927 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:22:28.303475 systemd[1]: var-lib-kubelet-pods-e8d59c1f\x2db47e\x2d45ea\x2db9c7\x2dac3495d8af8e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
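The .mount units being deactivated here carry systemd-escaped names: the kubelet volume paths under /var/lib/kubelet/pods/... have their slashes turned into dashes, while literal dashes become \x2d and the tilde in kubernetes.io~secret becomes \x7e. A small sketch of that escaping rule, based on my reading of the documented systemd-escape algorithm rather than systemd source:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // escapeUnitPart applies systemd's unit-name escaping to one path: "/" becomes
    // "-", and every byte that is not an ASCII alphanumeric, ":", "_" or "." is
    // written as a \xNN escape ("." is also escaped when it would be the first
    // character of the result).
    func escapeUnitPart(path string) string {
    	path = strings.Trim(path, "/")
    	var b strings.Builder
    	for i := 0; i < len(path); i++ {
    		c := path[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-')
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
    			c == ':', c == '_', c == '.' && i != 0:
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c)
    		}
    	}
    	return b.String()
    }

    func main() {
    	p := "/var/lib/kubelet/pods/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e/volumes/kubernetes.io~secret/clustermesh-secrets"
    	fmt.Println(escapeUnitPart(p) + ".mount")
    }

Run against the clustermesh-secrets volume path, it reproduces the unit name shown in the following entry, which is how the mount path can be recovered from (or mapped to) the escaped unit name when reading a log like this.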
Sep 6 00:22:28.303584 systemd[1]: var-lib-kubelet-pods-e8d59c1f\x2db47e\x2d45ea\x2db9c7\x2dac3495d8af8e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 6 00:22:28.833526 systemd[1]: Removed slice kubepods-burstable-pode8d59c1f_b47e_45ea_b9c7_ac3495d8af8e.slice.
Sep 6 00:22:28.868073 kubelet[1927]: E0906 00:22:28.868034 1927 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:22:29.040597 systemd[1]: Created slice kubepods-burstable-podd3dd68ee_92a9_48bb_81fd_6ac51e782c06.slice.
Sep 6 00:22:29.111061 kubelet[1927]: I0906 00:22:29.111000 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-hostproc\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111061 kubelet[1927]: I0906 00:22:29.111058 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-host-proc-sys-net\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111448 kubelet[1927]: I0906 00:22:29.111082 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-bpf-maps\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111448 kubelet[1927]: I0906 00:22:29.111107 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-etc-cni-netd\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111448 kubelet[1927]: I0906 00:22:29.111132 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-host-proc-sys-kernel\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111448 kubelet[1927]: I0906 00:22:29.111153 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhkj\" (UniqueName: \"kubernetes.io/projected/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-kube-api-access-hmhkj\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111448 kubelet[1927]: I0906 00:22:29.111173 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-cni-path\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111448 kubelet[1927]: I0906 00:22:29.111197 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-xtables-lock\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111620 kubelet[1927]: I0906 00:22:29.111220 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-hubble-tls\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111620 kubelet[1927]: I0906 00:22:29.111299 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-cilium-config-path\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111620 kubelet[1927]: I0906 00:22:29.111339 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-clustermesh-secrets\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111620 kubelet[1927]: I0906 00:22:29.111356 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-cilium-run\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111620 kubelet[1927]: I0906 00:22:29.111368 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-cilium-ipsec-secrets\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111620 kubelet[1927]: I0906 00:22:29.111385 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-cilium-cgroup\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.111767 kubelet[1927]: I0906 00:22:29.111396 1927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3dd68ee-92a9-48bb-81fd-6ac51e782c06-lib-modules\") pod \"cilium-bsrgv\" (UID: \"d3dd68ee-92a9-48bb-81fd-6ac51e782c06\") " pod="kube-system/cilium-bsrgv"
Sep 6 00:22:29.344208 kubelet[1927]: E0906 00:22:29.344153 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:29.344728 env[1203]: time="2025-09-06T00:22:29.344680668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bsrgv,Uid:d3dd68ee-92a9-48bb-81fd-6ac51e782c06,Namespace:kube-system,Attempt:0,}"
Sep 6 00:22:29.356885 env[1203]: time="2025-09-06T00:22:29.356819173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:22:29.356885 env[1203]: time="2025-09-06T00:22:29.356862864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:22:29.356885 env[1203]: time="2025-09-06T00:22:29.356878063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:22:29.357107 env[1203]: time="2025-09-06T00:22:29.357057006Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6 pid=3763 runtime=io.containerd.runc.v2
Sep 6 00:22:29.368463 systemd[1]: Started cri-containerd-7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6.scope.
Sep 6 00:22:29.385615 env[1203]: time="2025-09-06T00:22:29.385578418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bsrgv,Uid:d3dd68ee-92a9-48bb-81fd-6ac51e782c06,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\""
Sep 6 00:22:29.386176 kubelet[1927]: E0906 00:22:29.386152 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:29.392808 env[1203]: time="2025-09-06T00:22:29.392759345Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:22:29.403447 env[1203]: time="2025-09-06T00:22:29.403399271Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e650c177539d9ab71ecd0748724362a5ae672e255fffb6d8c7428ab28fff3c10\""
Sep 6 00:22:29.403839 env[1203]: time="2025-09-06T00:22:29.403808693Z" level=info msg="StartContainer for \"e650c177539d9ab71ecd0748724362a5ae672e255fffb6d8c7428ab28fff3c10\""
Sep 6 00:22:29.416902 systemd[1]: Started cri-containerd-e650c177539d9ab71ecd0748724362a5ae672e255fffb6d8c7428ab28fff3c10.scope.
Sep 6 00:22:29.441617 env[1203]: time="2025-09-06T00:22:29.441568923Z" level=info msg="StartContainer for \"e650c177539d9ab71ecd0748724362a5ae672e255fffb6d8c7428ab28fff3c10\" returns successfully"
Sep 6 00:22:29.449925 systemd[1]: cri-containerd-e650c177539d9ab71ecd0748724362a5ae672e255fffb6d8c7428ab28fff3c10.scope: Deactivated successfully.
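The reconciler_common entries above list every volume the kubelet verifies as attached before cilium-bsrgv can start: a set of host paths (hostproc, bpf-maps, cni-path, etc-cni-netd, lib-modules, xtables-lock, cilium-run, cilium-cgroup, host-proc-sys-net, host-proc-sys-kernel), two secrets (clustermesh-secrets, cilium-ipsec-secrets), a configmap (cilium-config-path), and projected volumes (hubble-tls, kube-api-access-hmhkj). The sketch below shows, using the Kubernetes Go API types, how such a volume list is typically declared; only the volume names come from the log, while the host paths and the secret/configmap object names are assumptions based on a typical Cilium DaemonSet.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVol declares a hostPath volume like the ones the kubelet
// reconciler verifies above (hostproc, bpf-maps, cni-path, ...).
func hostPathVol(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	// Paths and object names below are assumptions; only the volume
	// names are taken from the reconciler log entries.
	volumes := []corev1.Volume{
		hostPathVol("hostproc", "/proc"),
		hostPathVol("bpf-maps", "/sys/fs/bpf"),
		hostPathVol("cni-path", "/opt/cni/bin"),
		hostPathVol("etc-cni-netd", "/etc/cni/net.d"),
		hostPathVol("lib-modules", "/lib/modules"),
		hostPathVol("xtables-lock", "/run/xtables.lock"),
		hostPathVol("cilium-run", "/var/run/cilium"),
		hostPathVol("cilium-cgroup", "/run/cilium/cgroupv2"),
		{
			Name: "clustermesh-secrets",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"},
			},
		},
		{
			Name: "cilium-config-path",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```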
Sep 6 00:22:29.476562 env[1203]: time="2025-09-06T00:22:29.476493184Z" level=info msg="shim disconnected" id=e650c177539d9ab71ecd0748724362a5ae672e255fffb6d8c7428ab28fff3c10
Sep 6 00:22:29.476562 env[1203]: time="2025-09-06T00:22:29.476542585Z" level=warning msg="cleaning up after shim disconnected" id=e650c177539d9ab71ecd0748724362a5ae672e255fffb6d8c7428ab28fff3c10 namespace=k8s.io
Sep 6 00:22:29.476562 env[1203]: time="2025-09-06T00:22:29.476550751Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:29.485083 env[1203]: time="2025-09-06T00:22:29.485041226Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3845 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:30.006608 kubelet[1927]: E0906 00:22:30.006569 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:30.344020 env[1203]: time="2025-09-06T00:22:30.343856453Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:22:30.412161 env[1203]: time="2025-09-06T00:22:30.412073325Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4\""
Sep 6 00:22:30.412762 env[1203]: time="2025-09-06T00:22:30.412699994Z" level=info msg="StartContainer for \"78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4\""
Sep 6 00:22:30.430087 systemd[1]: Started cri-containerd-78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4.scope.
Sep 6 00:22:30.459944 systemd[1]: cri-containerd-78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4.scope: Deactivated successfully.
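The recurring dns.go:153 "Nameserver limits exceeded" errors indicate that the node's resolv.conf lists more nameservers than the kubelet will pass to pods (it applies at most three), so the list is truncated to 1.1.1.1 1.0.0.1 8.8.8.8. A rough stand-alone sketch of that truncation, assuming the /etc/resolv.conf path and a three-server cap:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the limit implied by the dns.go warning:
// only the first three nameservers from resolv.conf are applied.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // path assumed; kubelet reads the file named by --resolv-conf
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: omitting %d server(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```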
Sep 6 00:22:30.556247 env[1203]: time="2025-09-06T00:22:30.556188777Z" level=info msg="StartContainer for \"78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4\" returns successfully"
Sep 6 00:22:30.567448 kubelet[1927]: I0906 00:22:30.567384 1927 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:22:30Z","lastTransitionTime":"2025-09-06T00:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 6 00:22:30.717623 env[1203]: time="2025-09-06T00:22:30.717570731Z" level=info msg="shim disconnected" id=78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4
Sep 6 00:22:30.717623 env[1203]: time="2025-09-06T00:22:30.717621055Z" level=warning msg="cleaning up after shim disconnected" id=78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4 namespace=k8s.io
Sep 6 00:22:30.717623 env[1203]: time="2025-09-06T00:22:30.717631184Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:30.724200 env[1203]: time="2025-09-06T00:22:30.724165160Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3907 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:30.831036 kubelet[1927]: I0906 00:22:30.830985 1927 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e" path="/var/lib/kubelet/pods/e8d59c1f-b47e-45ea-b9c7-ac3495d8af8e/volumes"
Sep 6 00:22:31.009609 kubelet[1927]: E0906 00:22:31.009089 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:31.013598 env[1203]: time="2025-09-06T00:22:31.013555007Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:22:31.026845 env[1203]: time="2025-09-06T00:22:31.026802488Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c039f00d46073dd645cbac5c900ac15d7b1160db4ae2b75aab15eb3b9d0e75c0\""
Sep 6 00:22:31.027325 env[1203]: time="2025-09-06T00:22:31.027267697Z" level=info msg="StartContainer for \"c039f00d46073dd645cbac5c900ac15d7b1160db4ae2b75aab15eb3b9d0e75c0\""
Sep 6 00:22:31.041195 systemd[1]: Started cri-containerd-c039f00d46073dd645cbac5c900ac15d7b1160db4ae2b75aab15eb3b9d0e75c0.scope.
Sep 6 00:22:31.068939 env[1203]: time="2025-09-06T00:22:31.068887508Z" level=info msg="StartContainer for \"c039f00d46073dd645cbac5c900ac15d7b1160db4ae2b75aab15eb3b9d0e75c0\" returns successfully"
Sep 6 00:22:31.073877 systemd[1]: cri-containerd-c039f00d46073dd645cbac5c900ac15d7b1160db4ae2b75aab15eb3b9d0e75c0.scope: Deactivated successfully.
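The setters.go:618 entry above records the node's Ready condition flipping to False while the CNI plugin is still uninitialized. A small sketch of how that condition object looks when built with the Kubernetes API types (the field values are taken from the log line; the construction itself is illustrative, not kubelet's code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ts := metav1.NewTime(time.Date(2025, 9, 6, 0, 22, 30, 0, time.UTC))
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"cni plugin not initialized",
	}
	// Marshalling reproduces the condition={...} payload seen in the log.
	out, _ := json.Marshal(cond)
	fmt.Println(string(out))
}
```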
Sep 6 00:22:31.095351 env[1203]: time="2025-09-06T00:22:31.095301499Z" level=info msg="shim disconnected" id=c039f00d46073dd645cbac5c900ac15d7b1160db4ae2b75aab15eb3b9d0e75c0
Sep 6 00:22:31.095351 env[1203]: time="2025-09-06T00:22:31.095347224Z" level=warning msg="cleaning up after shim disconnected" id=c039f00d46073dd645cbac5c900ac15d7b1160db4ae2b75aab15eb3b9d0e75c0 namespace=k8s.io
Sep 6 00:22:31.095650 env[1203]: time="2025-09-06T00:22:31.095355690Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:31.101299 env[1203]: time="2025-09-06T00:22:31.101267265Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3964 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:31.353950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78e007989be134dc943cb0032e3f9ff1b8b71ac55906a21bcc9483470fb01be4-rootfs.mount: Deactivated successfully.
Sep 6 00:22:32.012608 kubelet[1927]: E0906 00:22:32.012569 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:32.017953 env[1203]: time="2025-09-06T00:22:32.016955775Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:22:32.027677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount194063322.mount: Deactivated successfully.
Sep 6 00:22:32.029939 env[1203]: time="2025-09-06T00:22:32.029867563Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2\""
Sep 6 00:22:32.030592 env[1203]: time="2025-09-06T00:22:32.030543948Z" level=info msg="StartContainer for \"d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2\""
Sep 6 00:22:32.051018 systemd[1]: Started cri-containerd-d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2.scope.
Sep 6 00:22:32.073658 systemd[1]: cri-containerd-d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2.scope: Deactivated successfully.
Sep 6 00:22:32.074406 env[1203]: time="2025-09-06T00:22:32.074180130Z" level=info msg="StartContainer for \"d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2\" returns successfully"
Sep 6 00:22:32.094780 env[1203]: time="2025-09-06T00:22:32.094725403Z" level=info msg="shim disconnected" id=d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2
Sep 6 00:22:32.094959 env[1203]: time="2025-09-06T00:22:32.094782840Z" level=warning msg="cleaning up after shim disconnected" id=d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2 namespace=k8s.io
Sep 6 00:22:32.094959 env[1203]: time="2025-09-06T00:22:32.094791787Z" level=info msg="cleaning up dead shim"
Sep 6 00:22:32.101373 env[1203]: time="2025-09-06T00:22:32.101350553Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:22:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4017 runtime=io.containerd.runc.v2\n"
Sep 6 00:22:32.353865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8295d535c44f0c8bc0afb37405a60b3a67cee0f7e4c3d9f9dbaf9263a4443a2-rootfs.mount: Deactivated successfully.
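By this point the sandbox 7a356c75... has run mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state in sequence, each as a short-lived container whose scope is deactivated and whose shim is cleaned up once it exits; cilium-agent follows next as the long-running container. A hedged sketch of that ordering expressed as an init-container list (only the names and their order come from the log; the image and everything else are placeholders):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Image is a placeholder; the container names and their order match
	// the CreateContainer entries in the log above.
	image := "quay.io/cilium/cilium:example" // hypothetical tag
	initContainers := []corev1.Container{
		{Name: "mount-cgroup", Image: image},
		{Name: "apply-sysctl-overwrites", Image: image},
		{Name: "mount-bpf-fs", Image: image},
		{Name: "clean-cilium-state", Image: image},
	}
	agent := corev1.Container{Name: "cilium-agent", Image: image}

	// Init containers run to completion one at a time, in order, before
	// the main container starts -- the start/deactivate pattern the
	// containerd and systemd entries show.
	for _, c := range initContainers {
		fmt.Println("init:", c.Name)
	}
	fmt.Println("main:", agent.Name)
}
```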
Sep 6 00:22:33.017199 kubelet[1927]: E0906 00:22:33.017163 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:33.021211 env[1203]: time="2025-09-06T00:22:33.021162765Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:22:33.034027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275559057.mount: Deactivated successfully.
Sep 6 00:22:33.035814 env[1203]: time="2025-09-06T00:22:33.035764971Z" level=info msg="CreateContainer within sandbox \"7a356c753e57ef78ffb4a36b3b925f7c0d49343831d73769ab2a78aedd0e48c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"792969ddbf2567ba60d8c9f8b6897ef3394649a308620fda87bea19a2daa717c\""
Sep 6 00:22:33.036219 env[1203]: time="2025-09-06T00:22:33.036194076Z" level=info msg="StartContainer for \"792969ddbf2567ba60d8c9f8b6897ef3394649a308620fda87bea19a2daa717c\""
Sep 6 00:22:33.052343 systemd[1]: Started cri-containerd-792969ddbf2567ba60d8c9f8b6897ef3394649a308620fda87bea19a2daa717c.scope.
Sep 6 00:22:33.075197 env[1203]: time="2025-09-06T00:22:33.075151805Z" level=info msg="StartContainer for \"792969ddbf2567ba60d8c9f8b6897ef3394649a308620fda87bea19a2daa717c\" returns successfully"
Sep 6 00:22:33.345947 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 6 00:22:34.021552 kubelet[1927]: E0906 00:22:34.021515 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:34.033964 kubelet[1927]: I0906 00:22:34.033880 1927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bsrgv" podStartSLOduration=5.033865089 podStartE2EDuration="5.033865089s" podCreationTimestamp="2025-09-06 00:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:22:34.033398072 +0000 UTC m=+85.308376966" watchObservedRunningTime="2025-09-06 00:22:34.033865089 +0000 UTC m=+85.308843983"
Sep 6 00:22:34.829015 kubelet[1927]: E0906 00:22:34.828973 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:35.345207 kubelet[1927]: E0906 00:22:35.345166 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:36.022077 systemd-networkd[1025]: lxc_health: Link UP
Sep 6 00:22:36.034037 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 00:22:36.034087 systemd-networkd[1025]: lxc_health: Gained carrier
Sep 6 00:22:37.345815 kubelet[1927]: E0906 00:22:37.345781 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:37.650467 systemd-networkd[1025]: lxc_health: Gained IPv6LL
Sep 6 00:22:38.034287 kubelet[1927]: E0906 00:22:38.034166 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:39.036046 kubelet[1927]: E0906 00:22:39.036006 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:41.829171 kubelet[1927]: E0906 00:22:41.829136 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:41.829772 kubelet[1927]: E0906 00:22:41.829287 1927 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:22:41.971627 systemd[1]: run-containerd-runc-k8s.io-792969ddbf2567ba60d8c9f8b6897ef3394649a308620fda87bea19a2daa717c-runc.9Fdgxi.mount: Deactivated successfully.
Sep 6 00:22:42.020135 sshd[3734]: pam_unix(sshd:session): session closed for user core
Sep 6 00:22:42.022723 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:46176.service: Deactivated successfully.
Sep 6 00:22:42.023487 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 00:22:42.024051 systemd-logind[1191]: Session 25 logged out. Waiting for processes to exit.
Sep 6 00:22:42.024734 systemd-logind[1191]: Removed session 25.
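The pod_startup_latency_tracker entry above reports podStartSLOduration=5.033865089s for cilium-bsrgv: the gap between podCreationTimestamp (00:22:29) and the observed running time (00:22:34.033865089), with the image-pull timestamps left at their zero value because no pull was recorded. A small sketch of that arithmetic using the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-09-06 00:22:29 +0000 UTC" form in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-06 00:22:29 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-06 00:22:34.033865089 +0000 UTC")

	// With no image pulls recorded (zero-valued pull timestamps), the SLO
	// duration is simply the observed running time minus the creation time.
	fmt.Println(running.Sub(created)) // prints 5.033865089s
}
```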