Sep 10 00:50:25.922131 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Sep 9 23:10:34 -00 2025
Sep 10 00:50:25.922155 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272
Sep 10 00:50:25.922163 kernel: BIOS-provided physical RAM map:
Sep 10 00:50:25.922169 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 10 00:50:25.922174 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 10 00:50:25.922192 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 10 00:50:25.922201 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 10 00:50:25.922209 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 10 00:50:25.922219 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:50:25.922226 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 10 00:50:25.922233 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 10 00:50:25.922241 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 10 00:50:25.922248 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 10 00:50:25.922254 kernel: NX (Execute Disable) protection: active
Sep 10 00:50:25.922263 kernel: SMBIOS 2.8 present.
Sep 10 00:50:25.922269 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 10 00:50:25.922275 kernel: Hypervisor detected: KVM
Sep 10 00:50:25.922295 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:50:25.922305 kernel: kvm-clock: cpu 0, msr 9919f001, primary cpu clock
Sep 10 00:50:25.922312 kernel: kvm-clock: using sched offset of 3702237034 cycles
Sep 10 00:50:25.922318 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:50:25.922325 kernel: tsc: Detected 2794.750 MHz processor
Sep 10 00:50:25.922331 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:50:25.922341 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:50:25.922349 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 10 00:50:25.922357 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:50:25.922366 kernel: Using GB pages for direct mapping
Sep 10 00:50:25.922374 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:50:25.922383 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 10 00:50:25.922391 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:50:25.922414 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:50:25.922420 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:50:25.922430 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 10 00:50:25.922438 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:50:25.922446 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:50:25.922455 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:50:25.922463 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:50:25.922471 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 10 00:50:25.922479 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 10 00:50:25.922485 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 10 00:50:25.922512 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 10 00:50:25.922519 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 10 00:50:25.922525 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 10 00:50:25.922532 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 10 00:50:25.922539 kernel: No NUMA configuration found
Sep 10 00:50:25.922546 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 10 00:50:25.922554 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 10 00:50:25.922560 kernel: Zone ranges:
Sep 10 00:50:25.922567 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:50:25.922598 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 10 00:50:25.922605 kernel: Normal empty
Sep 10 00:50:25.922611 kernel: Movable zone start for each node
Sep 10 00:50:25.922618 kernel: Early memory node ranges
Sep 10 00:50:25.922625 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 10 00:50:25.922632 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 10 00:50:25.922641 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 10 00:50:25.922651 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:50:25.922670 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 10 00:50:25.922677 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 10 00:50:25.922684 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:50:25.922691 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:50:25.922697 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:50:25.922711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:50:25.922723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:50:25.922730 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:50:25.922742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:50:25.922749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:50:25.922756 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:50:25.922762 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:50:25.922769 kernel: TSC deadline timer available
Sep 10 00:50:25.922787 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:50:25.922795 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:50:25.922801 kernel: kvm-guest: setup PV sched yield
Sep 10 00:50:25.922808 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 10 00:50:25.922817 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:50:25.922824 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:50:25.922839 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:50:25.922851 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 10 00:50:25.922857 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 10 00:50:25.922864 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:50:25.922870 kernel: kvm-guest: setup async PF for cpu 0
Sep 10 00:50:25.922877 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Sep 10 00:50:25.922884 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:50:25.922905 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:50:25.922912 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 10 00:50:25.922918 kernel: Policy zone: DMA32
Sep 10 00:50:25.922926 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272
Sep 10 00:50:25.922933 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:50:25.922951 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:50:25.922959 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:50:25.922966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:50:25.922975 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved)
Sep 10 00:50:25.922982 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:50:25.923000 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 10 00:50:25.923008 kernel: ftrace: allocated 136 pages with 2 groups
Sep 10 00:50:25.923015 kernel: rcu: Hierarchical RCU implementation.
Sep 10 00:50:25.923022 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:50:25.923029 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:50:25.923043 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:50:25.923054 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:50:25.923063 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:50:25.923070 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:50:25.923077 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:50:25.923084 kernel: random: crng init done
Sep 10 00:50:25.923091 kernel: Console: colour VGA+ 80x25
Sep 10 00:50:25.923097 kernel: printk: console [ttyS0] enabled
Sep 10 00:50:25.923123 kernel: ACPI: Core revision 20210730
Sep 10 00:50:25.923130 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:50:25.923137 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:50:25.923147 kernel: x2apic enabled
Sep 10 00:50:25.923153 kernel: Switched APIC routing to physical x2apic.
Sep 10 00:50:25.923162 kernel: kvm-guest: setup PV IPIs
Sep 10 00:50:25.923169 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:50:25.923183 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:50:25.923197 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 10 00:50:25.923204 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:50:25.923211 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:50:25.923218 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:50:25.923232 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:50:25.923251 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:50:25.923260 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:50:25.923267 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:50:25.923274 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:50:25.923281 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:50:25.923288 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:50:25.923295 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 10 00:50:25.923314 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:50:25.923324 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:50:25.923331 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:50:25.923340 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:50:25.923349 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 10 00:50:25.923373 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:50:25.923384 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:50:25.923393 kernel: LSM: Security Framework initializing
Sep 10 00:50:25.923405 kernel: SELinux: Initializing.
Sep 10 00:50:25.923420 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:50:25.923435 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:50:25.923444 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:50:25.923453 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:50:25.923471 kernel: ... version: 0
Sep 10 00:50:25.923483 kernel: ... bit width: 48
Sep 10 00:50:25.923490 kernel: ... generic registers: 6
Sep 10 00:50:25.923497 kernel: ... value mask: 0000ffffffffffff
Sep 10 00:50:25.923506 kernel: ... max period: 00007fffffffffff
Sep 10 00:50:25.923513 kernel: ... fixed-purpose events: 0
Sep 10 00:50:25.923532 kernel: ... event mask: 000000000000003f
Sep 10 00:50:25.923539 kernel: signal: max sigframe size: 1776
Sep 10 00:50:25.923546 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:50:25.923553 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:50:25.923560 kernel: x86: Booting SMP configuration:
Sep 10 00:50:25.923567 kernel: .... node #0, CPUs: #1
Sep 10 00:50:25.923586 kernel: kvm-clock: cpu 1, msr 9919f041, secondary cpu clock
Sep 10 00:50:25.923595 kernel: kvm-guest: setup async PF for cpu 1
Sep 10 00:50:25.923602 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Sep 10 00:50:25.923609 kernel: #2
Sep 10 00:50:25.923616 kernel: kvm-clock: cpu 2, msr 9919f081, secondary cpu clock
Sep 10 00:50:25.923623 kernel: kvm-guest: setup async PF for cpu 2
Sep 10 00:50:25.923630 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Sep 10 00:50:25.923641 kernel: #3
Sep 10 00:50:25.923648 kernel: kvm-clock: cpu 3, msr 9919f0c1, secondary cpu clock
Sep 10 00:50:25.923655 kernel: kvm-guest: setup async PF for cpu 3
Sep 10 00:50:25.923662 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Sep 10 00:50:25.923670 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:50:25.923677 kernel: smpboot: Max logical packages: 1
Sep 10 00:50:25.923684 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 10 00:50:25.923691 kernel: devtmpfs: initialized
Sep 10 00:50:25.923698 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:50:25.923705 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:50:25.923712 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:50:25.923719 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:50:25.923726 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:50:25.923734 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:50:25.923742 kernel: audit: type=2000 audit(1757465425.147:1): state=initialized audit_enabled=0 res=1
Sep 10 00:50:25.923748 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:50:25.923755 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:50:25.923762 kernel: cpuidle: using governor menu
Sep 10 00:50:25.923769 kernel: ACPI: bus type PCI registered
Sep 10 00:50:25.923776 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:50:25.923783 kernel: dca service started, version 1.12.1
Sep 10 00:50:25.923790 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:50:25.923799 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 10 00:50:25.923806 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:50:25.923813 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:50:25.923820 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:50:25.923827 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:50:25.923834 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:50:25.923841 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:50:25.923848 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:50:25.923854 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 10 00:50:25.923863 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 10 00:50:25.923870 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 10 00:50:25.923877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:50:25.923884 kernel: ACPI: Interpreter enabled
Sep 10 00:50:25.923891 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:50:25.923897 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:50:25.923904 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:50:25.923911 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:50:25.923918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:50:25.924065 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:50:25.924307 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:50:25.924466 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:50:25.924478 kernel: PCI host bridge to bus 0000:00
Sep 10 00:50:25.924608 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:50:25.924684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:50:25.924765 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:50:25.924831 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:50:25.924897 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:50:25.924968 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 10 00:50:25.925036 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:50:25.925168 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:50:25.925281 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:50:25.925364 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 10 00:50:25.925441 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 10 00:50:25.925516 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 10 00:50:25.925606 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:50:25.925721 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:50:25.925800 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 10 00:50:25.925888 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 10 00:50:25.925970 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 10 00:50:25.926059 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:50:25.926147 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 10 00:50:25.926224 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 10 00:50:25.926297 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 10 00:50:25.926387 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:50:25.926467 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 10 00:50:25.926542 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 10 00:50:25.926634 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 10 00:50:25.926710 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 10 00:50:25.926799 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:50:25.926878 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:50:25.926964 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:50:25.927043 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 10 00:50:25.927127 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 10 00:50:25.927220 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:50:25.927295 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 10 00:50:25.927305 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:50:25.927312 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:50:25.927320 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:50:25.927327 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:50:25.927337 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:50:25.927344 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:50:25.927351 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:50:25.927359 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:50:25.927366 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:50:25.927374 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:50:25.927381 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:50:25.927388 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:50:25.927396 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:50:25.927405 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:50:25.927412 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:50:25.927419 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:50:25.927427 kernel: iommu: Default domain type: Translated
Sep 10 00:50:25.927434 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:50:25.927508 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:50:25.927595 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:50:25.927672 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:50:25.927688 kernel: vgaarb: loaded
Sep 10 00:50:25.927695 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 10 00:50:25.927705 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 10 00:50:25.927713 kernel: PTP clock support registered
Sep 10 00:50:25.927720 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:50:25.927727 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:50:25.927735 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 10 00:50:25.927742 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 10 00:50:25.927749 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:50:25.927758 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:50:25.927766 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:50:25.927775 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:50:25.927792 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:50:25.927803 kernel: pnp: PnP ACPI init
Sep 10 00:50:25.927934 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:50:25.927946 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:50:25.927954 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:50:25.927965 kernel: NET: Registered PF_INET protocol family
Sep 10 00:50:25.927972 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:50:25.927980 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:50:25.927987 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:50:25.927994 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:50:25.928002 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 10 00:50:25.928009 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:50:25.928017 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:50:25.928024 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:50:25.928033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:50:25.928041 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:50:25.928111 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:50:25.928190 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:50:25.928256 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:50:25.928322 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:50:25.928388 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:50:25.928453 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 10 00:50:25.928467 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:50:25.928474 kernel: Initialise system trusted keyrings
Sep 10 00:50:25.928482 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:50:25.928489 kernel: Key type asymmetric registered
Sep 10 00:50:25.928496 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:50:25.928503 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 00:50:25.928511 kernel: io scheduler mq-deadline registered
Sep 10 00:50:25.928518 kernel: io scheduler kyber registered
Sep 10 00:50:25.928526 kernel: io scheduler bfq registered
Sep 10 00:50:25.928535 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:50:25.928543 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:50:25.928550 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:50:25.928557 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:50:25.928564 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:50:25.928586 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:50:25.928594 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:50:25.928601 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:50:25.928608 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:50:25.928619 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:50:25.928709 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:50:25.928779 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:50:25.928847 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:50:25 UTC (1757465425)
Sep 10 00:50:25.928916 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:50:25.928926 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:50:25.928934 kernel: Segment Routing with IPv6
Sep 10 00:50:25.928941 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:50:25.928951 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:50:25.928959 kernel: Key type dns_resolver registered
Sep 10 00:50:25.928966 kernel: IPI shorthand broadcast: enabled
Sep 10 00:50:25.928973 kernel: sched_clock: Marking stable (431002620, 99993648)->(606090229, -75093961)
Sep 10 00:50:25.928980 kernel: registered taskstats version 1
Sep 10 00:50:25.928987 kernel: Loading compiled-in X.509 certificates
Sep 10 00:50:25.928995 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 3af57cd809cc9e43d7af9f276bb20b532a4147af'
Sep 10 00:50:25.929002 kernel: Key type .fscrypt registered
Sep 10 00:50:25.929009 kernel: Key type fscrypt-provisioning registered
Sep 10 00:50:25.929018 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:50:25.929025 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:50:25.929033 kernel: ima: No architecture policies found
Sep 10 00:50:25.929040 kernel: clk: Disabling unused clocks
Sep 10 00:50:25.929047 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 10 00:50:25.929055 kernel: Write protecting the kernel read-only data: 28672k
Sep 10 00:50:25.929062 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 10 00:50:25.929069 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 10 00:50:25.929076 kernel: Run /init as init process
Sep 10 00:50:25.929085 kernel: with arguments:
Sep 10 00:50:25.929092 kernel: /init
Sep 10 00:50:25.929099 kernel: with environment:
Sep 10 00:50:25.929107 kernel: HOME=/
Sep 10 00:50:25.929122 kernel: TERM=linux
Sep 10 00:50:25.929129 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:50:25.929139 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 10 00:50:25.929150 systemd[1]: Detected virtualization kvm.
Sep 10 00:50:25.929160 systemd[1]: Detected architecture x86-64.
Sep 10 00:50:25.929167 systemd[1]: Running in initrd.
Sep 10 00:50:25.929175 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:50:25.929183 systemd[1]: Hostname set to .
Sep 10 00:50:25.929191 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:50:25.929199 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:50:25.929206 systemd[1]: Started systemd-ask-password-console.path.
Sep 10 00:50:25.929214 systemd[1]: Reached target cryptsetup.target.
Sep 10 00:50:25.929224 systemd[1]: Reached target paths.target.
Sep 10 00:50:25.929232 systemd[1]: Reached target slices.target.
Sep 10 00:50:25.929248 systemd[1]: Reached target swap.target.
Sep 10 00:50:25.929257 systemd[1]: Reached target timers.target.
Sep 10 00:50:25.929266 systemd[1]: Listening on iscsid.socket.
Sep 10 00:50:25.929275 systemd[1]: Listening on iscsiuio.socket.
Sep 10 00:50:25.929283 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 10 00:50:25.929291 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 10 00:50:25.929299 systemd[1]: Listening on systemd-journald.socket.
Sep 10 00:50:25.929307 systemd[1]: Listening on systemd-networkd.socket.
Sep 10 00:50:25.929315 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 10 00:50:25.929323 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 10 00:50:25.929330 systemd[1]: Reached target sockets.target.
Sep 10 00:50:25.929338 systemd[1]: Starting kmod-static-nodes.service...
Sep 10 00:50:25.929347 systemd[1]: Finished network-cleanup.service.
Sep 10 00:50:25.929355 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:50:25.929363 systemd[1]: Starting systemd-journald.service...
Sep 10 00:50:25.929371 systemd[1]: Starting systemd-modules-load.service...
Sep 10 00:50:25.929379 systemd[1]: Starting systemd-resolved.service...
Sep 10 00:50:25.929387 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 10 00:50:25.929395 systemd[1]: Finished kmod-static-nodes.service.
Sep 10 00:50:25.929403 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:50:25.929411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 10 00:50:25.929431 systemd-journald[198]: Journal started
Sep 10 00:50:25.929481 systemd-journald[198]: Runtime Journal (/run/log/journal/43e575e54c074bbe80aa96552e5cf03e) is 6.0M, max 48.5M, 42.5M free.
Sep 10 00:50:25.915658 systemd-modules-load[199]: Inserted module 'overlay'
Sep 10 00:50:25.951023 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:50:25.951048 kernel: Bridge firewalling registered
Sep 10 00:50:25.934390 systemd-resolved[200]: Positive Trust Anchors:
Sep 10 00:50:25.934407 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:50:25.934434 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 10 00:50:25.936587 systemd-resolved[200]: Defaulting to hostname 'linux'.
Sep 10 00:50:25.950111 systemd-modules-load[199]: Inserted module 'br_netfilter'
Sep 10 00:50:25.961224 systemd[1]: Started systemd-journald.service.
Sep 10 00:50:25.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.962176 systemd[1]: Started systemd-resolved.service.
Sep 10 00:50:25.966514 kernel: audit: type=1130 audit(1757465425.961:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.966534 kernel: audit: type=1130 audit(1757465425.965:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.966736 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 10 00:50:25.971511 kernel: SCSI subsystem initialized
Sep 10 00:50:25.971530 kernel: audit: type=1130 audit(1757465425.970:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.971704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 10 00:50:25.979673 kernel: audit: type=1130 audit(1757465425.974:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.975356 systemd[1]: Reached target nss-lookup.target.
Sep 10 00:50:25.980278 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 10 00:50:25.984595 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:50:25.984612 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:50:25.984621 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 10 00:50:25.988530 systemd-modules-load[199]: Inserted module 'dm_multipath'
Sep 10 00:50:25.993080 kernel: audit: type=1130 audit(1757465425.988:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.989361 systemd[1]: Finished systemd-modules-load.service.
Sep 10 00:50:25.993216 systemd[1]: Starting systemd-sysctl.service...
Sep 10 00:50:25.996490 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 10 00:50:25.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:25.998972 systemd[1]: Starting dracut-cmdline.service...
Sep 10 00:50:26.002561 kernel: audit: type=1130 audit(1757465425.997:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.002863 systemd[1]: Finished systemd-sysctl.service.
Sep 10 00:50:26.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.007616 kernel: audit: type=1130 audit(1757465426.003:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.007862 dracut-cmdline[221]: dracut-dracut-053
Sep 10 00:50:26.010186 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272
Sep 10 00:50:26.072614 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:50:26.088606 kernel: iscsi: registered transport (tcp)
Sep 10 00:50:26.110033 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:50:26.110106 kernel: QLogic iSCSI HBA Driver
Sep 10 00:50:26.141978 systemd[1]: Finished dracut-cmdline.service.
Sep 10 00:50:26.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.144456 systemd[1]: Starting dracut-pre-udev.service...
Sep 10 00:50:26.148164 kernel: audit: type=1130 audit(1757465426.143:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.189614 kernel: raid6: avx2x4 gen() 30077 MB/s
Sep 10 00:50:26.206597 kernel: raid6: avx2x4 xor() 7520 MB/s
Sep 10 00:50:26.223601 kernel: raid6: avx2x2 gen() 32337 MB/s
Sep 10 00:50:26.240594 kernel: raid6: avx2x2 xor() 19035 MB/s
Sep 10 00:50:26.257597 kernel: raid6: avx2x1 gen() 20077 MB/s
Sep 10 00:50:26.274615 kernel: raid6: avx2x1 xor() 12420 MB/s
Sep 10 00:50:26.291599 kernel: raid6: sse2x4 gen() 14619 MB/s
Sep 10 00:50:26.308597 kernel: raid6: sse2x4 xor() 7250 MB/s
Sep 10 00:50:26.325599 kernel: raid6: sse2x2 gen() 16220 MB/s
Sep 10 00:50:26.342601 kernel: raid6: sse2x2 xor() 9671 MB/s
Sep 10 00:50:26.359600 kernel: raid6: sse2x1 gen() 11835 MB/s
Sep 10 00:50:26.376956 kernel: raid6: sse2x1 xor() 7732 MB/s
Sep 10 00:50:26.377022 kernel: raid6: using algorithm avx2x2 gen() 32337 MB/s
Sep 10 00:50:26.377032 kernel: raid6: .... xor() 19035 MB/s, rmw enabled
Sep 10 00:50:26.377640 kernel: raid6: using avx2x2 recovery algorithm
Sep 10 00:50:26.389597 kernel: xor: automatically using best checksumming function avx
Sep 10 00:50:26.481610 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Sep 10 00:50:26.490514 systemd[1]: Finished dracut-pre-udev.service.
Sep 10 00:50:26.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.494000 audit: BPF prog-id=7 op=LOAD
Sep 10 00:50:26.494000 audit: BPF prog-id=8 op=LOAD
Sep 10 00:50:26.495418 systemd[1]: Starting systemd-udevd.service...
Sep 10 00:50:26.496890 kernel: audit: type=1130 audit(1757465426.491:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.507614 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Sep 10 00:50:26.511590 systemd[1]: Started systemd-udevd.service.
Sep 10 00:50:26.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.514493 systemd[1]: Starting dracut-pre-trigger.service...
Sep 10 00:50:26.523612 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Sep 10 00:50:26.549652 systemd[1]: Finished dracut-pre-trigger.service.
Sep 10 00:50:26.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.551986 systemd[1]: Starting systemd-udev-trigger.service...
Sep 10 00:50:26.586893 systemd[1]: Finished systemd-udev-trigger.service.
Sep 10 00:50:26.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:26.614844 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:50:26.620912 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:50:26.620925 kernel: GPT:9289727 != 19775487
Sep 10 00:50:26.620934 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:50:26.620944 kernel: GPT:9289727 != 19775487
Sep 10 00:50:26.620953 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:50:26.620961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:50:26.623598 kernel: cryptd: max_cpu_qlen set to 1000
Sep 10 00:50:26.634611 kernel: libata version 3.00 loaded.
Sep 10 00:50:26.634636 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 10 00:50:26.635884 kernel: AES CTR mode by8 optimization enabled
Sep 10 00:50:26.642602 kernel: ahci 0000:00:1f.2: version 3.0
Sep 10 00:50:26.660533 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 10 00:50:26.660552 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 10 00:50:26.660669 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 10 00:50:26.660750 kernel: scsi host0: ahci
Sep 10 00:50:26.660852 kernel: scsi host1: ahci
Sep 10 00:50:26.660942 kernel: scsi host2: ahci
Sep 10 00:50:26.661048 kernel: scsi host3: ahci
Sep 10 00:50:26.661148 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471)
Sep 10 00:50:26.661159 kernel: scsi host4: ahci
Sep 10 00:50:26.661251 kernel: scsi host5: ahci
Sep 10 00:50:26.661341 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 10 00:50:26.661352 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 10 00:50:26.661361 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 10 00:50:26.661369 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 10 00:50:26.661379 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 10 00:50:26.661387 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 10 00:50:26.655816 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 10 00:50:26.702322 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 10 00:50:26.703423 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 10 00:50:26.708625 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 10 00:50:26.711668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 10 00:50:26.713398 systemd[1]: Starting disk-uuid.service...
Sep 10 00:50:26.722741 disk-uuid[520]: Primary Header is updated.
Sep 10 00:50:26.722741 disk-uuid[520]: Secondary Entries is updated.
Sep 10 00:50:26.722741 disk-uuid[520]: Secondary Header is updated.
Sep 10 00:50:26.726595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:50:26.730597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:50:26.733595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:50:26.971153 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 10 00:50:26.971220 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 10 00:50:26.971230 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 10 00:50:26.972601 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 10 00:50:26.973602 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 10 00:50:26.974616 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 10 00:50:26.975608 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 10 00:50:26.977027 kernel: ata3.00: applying bridge limits
Sep 10 00:50:26.977048 kernel: ata3.00: configured for UDMA/100
Sep 10 00:50:26.977608 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 00:50:27.011608 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 10 00:50:27.033374 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 00:50:27.033386 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 10 00:50:27.734593 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:50:27.734841 disk-uuid[521]: The operation has completed successfully.
Sep 10 00:50:27.756045 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:50:27.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.756137 systemd[1]: Finished disk-uuid.service.
Sep 10 00:50:27.768557 systemd[1]: Starting verity-setup.service...
Sep 10 00:50:27.783602 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 10 00:50:27.803394 systemd[1]: Found device dev-mapper-usr.device.
Sep 10 00:50:27.806291 systemd[1]: Mounting sysusr-usr.mount...
Sep 10 00:50:27.808259 systemd[1]: Finished verity-setup.service.
Sep 10 00:50:27.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.867601 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 10 00:50:27.868093 systemd[1]: Mounted sysusr-usr.mount.
Sep 10 00:50:27.868260 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 10 00:50:27.869539 systemd[1]: Starting ignition-setup.service...
Sep 10 00:50:27.872034 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 10 00:50:27.881989 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:50:27.882027 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:50:27.882037 kernel: BTRFS info (device vda6): has skinny extents
Sep 10 00:50:27.889566 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:50:27.932957 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 10 00:50:27.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.934000 audit: BPF prog-id=9 op=LOAD
Sep 10 00:50:27.935144 systemd[1]: Starting systemd-networkd.service...
Sep 10 00:50:27.955558 systemd-networkd[710]: lo: Link UP
Sep 10 00:50:27.955568 systemd-networkd[710]: lo: Gained carrier
Sep 10 00:50:27.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.956097 systemd-networkd[710]: Enumeration completed
Sep 10 00:50:27.956172 systemd[1]: Started systemd-networkd.service.
Sep 10 00:50:27.956362 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:50:27.957818 systemd[1]: Reached target network.target.
Sep 10 00:50:27.958168 systemd-networkd[710]: eth0: Link UP
Sep 10 00:50:27.958172 systemd-networkd[710]: eth0: Gained carrier
Sep 10 00:50:27.960002 systemd[1]: Starting iscsiuio.service...
Sep 10 00:50:27.974286 systemd[1]: Started iscsiuio.service.
Sep 10 00:50:27.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.976692 systemd[1]: Starting iscsid.service...
Sep 10 00:50:27.978645 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:50:27.980207 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 10 00:50:27.980207 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 10 00:50:27.980207 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 10 00:50:27.980207 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 10 00:50:27.980207 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 10 00:50:27.980207 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 10 00:50:27.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.980340 systemd[1]: Started iscsid.service.
Sep 10 00:50:28.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:27.980988 systemd[1]: Starting dracut-initqueue.service...
Sep 10 00:50:27.990949 systemd[1]: Finished dracut-initqueue.service.
Sep 10 00:50:27.992024 systemd[1]: Reached target remote-fs-pre.target.
Sep 10 00:50:27.992100 systemd[1]: Reached target remote-cryptsetup.target.
Sep 10 00:50:27.992272 systemd[1]: Reached target remote-fs.target.
Sep 10 00:50:27.993044 systemd[1]: Starting dracut-pre-mount.service...
Sep 10 00:50:27.999911 systemd[1]: Finished ignition-setup.service.
Sep 10 00:50:28.000887 systemd[1]: Finished dracut-pre-mount.service.
Sep 10 00:50:28.003007 systemd[1]: Starting ignition-fetch-offline.service...
Sep 10 00:50:28.040456 ignition[730]: Ignition 2.14.0
Sep 10 00:50:28.040471 ignition[730]: Stage: fetch-offline
Sep 10 00:50:28.040528 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:50:28.040538 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:50:28.040656 ignition[730]: parsed url from cmdline: ""
Sep 10 00:50:28.040659 ignition[730]: no config URL provided
Sep 10 00:50:28.040664 ignition[730]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:50:28.040672 ignition[730]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:50:28.040691 ignition[730]: op(1): [started] loading QEMU firmware config module
Sep 10 00:50:28.040695 ignition[730]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:50:28.044631 ignition[730]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:50:28.085834 ignition[730]: parsing config with SHA512: 3c9166731a43b13bc3b6523b67d1cca7d15417b42f794d06dc17985f0bf4b8bb420b8658b36794d815cb69d5d8c133e6609dad1a1f68c0c80a7124ef77d58bab
Sep 10 00:50:28.092641 unknown[730]: fetched base config from "system"
Sep 10 00:50:28.092652 unknown[730]: fetched user config from "qemu"
Sep 10 00:50:28.093089 ignition[730]: fetch-offline: fetch-offline passed
Sep 10 00:50:28.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.094523 systemd[1]: Finished ignition-fetch-offline.service.
Sep 10 00:50:28.093142 ignition[730]: Ignition finished successfully
Sep 10 00:50:28.096133 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:50:28.096924 systemd[1]: Starting ignition-kargs.service...
Sep 10 00:50:28.107463 ignition[738]: Ignition 2.14.0
Sep 10 00:50:28.107472 ignition[738]: Stage: kargs
Sep 10 00:50:28.107585 ignition[738]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:50:28.109930 systemd[1]: Finished ignition-kargs.service.
Sep 10 00:50:28.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.107597 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:50:28.108624 ignition[738]: kargs: kargs passed
Sep 10 00:50:28.112291 systemd[1]: Starting ignition-disks.service...
Sep 10 00:50:28.108664 ignition[738]: Ignition finished successfully
Sep 10 00:50:28.122758 ignition[744]: Ignition 2.14.0
Sep 10 00:50:28.122767 ignition[744]: Stage: disks
Sep 10 00:50:28.122858 ignition[744]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:50:28.122867 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:50:28.126696 ignition[744]: disks: disks passed
Sep 10 00:50:28.126741 ignition[744]: Ignition finished successfully
Sep 10 00:50:28.128550 systemd[1]: Finished ignition-disks.service.
Sep 10 00:50:28.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.128732 systemd[1]: Reached target initrd-root-device.target.
Sep 10 00:50:28.130831 systemd[1]: Reached target local-fs-pre.target.
Sep 10 00:50:28.132426 systemd[1]: Reached target local-fs.target.
Sep 10 00:50:28.132801 systemd[1]: Reached target sysinit.target.
Sep 10 00:50:28.135892 systemd[1]: Reached target basic.target.
Sep 10 00:50:28.138204 systemd[1]: Starting systemd-fsck-root.service...
Sep 10 00:50:28.148558 systemd-fsck[752]: ROOT: clean, 629/553520 files, 56028/553472 blocks
Sep 10 00:50:28.153604 systemd[1]: Finished systemd-fsck-root.service.
Sep 10 00:50:28.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.154404 systemd[1]: Mounting sysroot.mount...
Sep 10 00:50:28.161383 systemd[1]: Mounted sysroot.mount.
Sep 10 00:50:28.163602 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 10 00:50:28.162127 systemd[1]: Reached target initrd-root-fs.target.
Sep 10 00:50:28.164222 systemd[1]: Mounting sysroot-usr.mount...
Sep 10 00:50:28.165326 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 10 00:50:28.165355 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:50:28.165374 systemd[1]: Reached target ignition-diskful.target.
Sep 10 00:50:28.167318 systemd[1]: Mounted sysroot-usr.mount.
Sep 10 00:50:28.169223 systemd[1]: Starting initrd-setup-root.service...
Sep 10 00:50:28.173681 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:50:28.176228 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:50:28.180033 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:50:28.182683 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:50:28.208047 systemd[1]: Finished initrd-setup-root.service.
Sep 10 00:50:28.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.209644 systemd[1]: Starting ignition-mount.service...
Sep 10 00:50:28.210976 systemd[1]: Starting sysroot-boot.service...
Sep 10 00:50:28.217638 bash[804]: umount: /sysroot/usr/share/oem: not mounted.
Sep 10 00:50:28.329930 systemd[1]: Finished sysroot-boot.service.
Sep 10 00:50:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.333204 ignition[805]: INFO : Ignition 2.14.0
Sep 10 00:50:28.333204 ignition[805]: INFO : Stage: mount
Sep 10 00:50:28.334821 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:50:28.334821 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:50:28.337587 ignition[805]: INFO : mount: mount passed
Sep 10 00:50:28.338316 ignition[805]: INFO : Ignition finished successfully
Sep 10 00:50:28.339847 systemd[1]: Finished ignition-mount.service.
Sep 10 00:50:28.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:28.818903 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 10 00:50:28.824603 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Sep 10 00:50:28.826999 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:50:28.827036 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:50:28.827062 kernel: BTRFS info (device vda6): has skinny extents
Sep 10 00:50:28.831507 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 10 00:50:28.833387 systemd[1]: Starting ignition-files.service...
Sep 10 00:50:28.852522 ignition[833]: INFO : Ignition 2.14.0
Sep 10 00:50:28.852522 ignition[833]: INFO : Stage: files
Sep 10 00:50:28.854271 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:50:28.854271 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:50:28.856351 ignition[833]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:50:28.857660 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:50:28.859214 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:50:28.860741 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:50:28.860741 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:50:28.860741 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:50:28.860399 unknown[833]: wrote ssh authorized keys file for user: core
Sep 10 00:50:28.865928 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:50:28.865928 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 10 00:50:28.904696 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:50:29.089019 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 10 00:50:29.091157 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:50:29.091157 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 10 00:50:29.186083 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 10 00:50:29.364982 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:50:29.364982 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:50:29.368622 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:50:29.368622 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:50:29.372023 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:50:29.373654 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:50:29.375328 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:50:29.376989 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:50:29.378646 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:50:29.380409 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:50:29.382072 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:50:29.383705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:50:29.386112 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:50:29.388446 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:50:29.390426 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 10 00:50:29.435733 systemd-networkd[710]: eth0: Gained IPv6LL
Sep 10 00:50:29.641239 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 10 00:50:30.517878 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:50:30.517878 ignition[833]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:50:30.522349 ignition[833]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:50:30.557351 ignition[833]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:50:30.559037 ignition[833]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:50:30.560421 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:50:30.562122 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:50:30.564075 ignition[833]: INFO : files: files passed
Sep 10 00:50:30.564777 ignition[833]: INFO : Ignition finished successfully
Sep 10 00:50:30.566669 systemd[1]: Finished ignition-files.service.
Sep 10 00:50:30.572786 kernel: kauditd_printk_skb: 23 callbacks suppressed
Sep 10 00:50:30.572812 kernel: audit: type=1130 audit(1757465430.566:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.568489 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 10 00:50:30.572817 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 10 00:50:30.577908 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 10 00:50:30.583045 kernel: audit: type=1130 audit(1757465430.577:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.573639 systemd[1]: Starting ignition-quench.service...
Sep 10 00:50:30.590247 kernel: audit: type=1130 audit(1757465430.583:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.590264 kernel: audit: type=1131 audit(1757465430.583:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 10 00:50:30.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:30.590349 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:50:30.576765 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 10 00:50:30.578091 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 00:50:30.578163 systemd[1]: Finished ignition-quench.service. Sep 10 00:50:30.583155 systemd[1]: Reached target ignition-complete.target. Sep 10 00:50:30.590932 systemd[1]: Starting initrd-parse-etc.service... Sep 10 00:50:30.602181 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 00:50:30.602268 systemd[1]: Finished initrd-parse-etc.service. Sep 10 00:50:30.611029 kernel: audit: type=1130 audit(1757465430.603:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:30.611044 kernel: audit: type=1131 audit(1757465430.603:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:30.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:30.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:30.604095 systemd[1]: Reached target initrd-fs.target. Sep 10 00:50:30.611040 systemd[1]: Reached target initrd.target. 
Sep 10 00:50:30.611835 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 10 00:50:30.612696 systemd[1]: Starting dracut-pre-pivot.service...
Sep 10 00:50:30.622299 systemd[1]: Finished dracut-pre-pivot.service.
Sep 10 00:50:30.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.624714 systemd[1]: Starting initrd-cleanup.service...
Sep 10 00:50:30.628142 kernel: audit: type=1130 audit(1757465430.623:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.632465 systemd[1]: Stopped target nss-lookup.target.
Sep 10 00:50:30.633360 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 10 00:50:30.634902 systemd[1]: Stopped target timers.target.
Sep 10 00:50:30.636459 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:50:30.642283 kernel: audit: type=1131 audit(1757465430.637:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.636550 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 10 00:50:30.638055 systemd[1]: Stopped target initrd.target.
Sep 10 00:50:30.642357 systemd[1]: Stopped target basic.target.
Sep 10 00:50:30.643880 systemd[1]: Stopped target ignition-complete.target.
Sep 10 00:50:30.645427 systemd[1]: Stopped target ignition-diskful.target.
Sep 10 00:50:30.646969 systemd[1]: Stopped target initrd-root-device.target.
Sep 10 00:50:30.648660 systemd[1]: Stopped target remote-fs.target.
Sep 10 00:50:30.650224 systemd[1]: Stopped target remote-fs-pre.target.
Sep 10 00:50:30.651866 systemd[1]: Stopped target sysinit.target.
Sep 10 00:50:30.653412 systemd[1]: Stopped target local-fs.target.
Sep 10 00:50:30.654953 systemd[1]: Stopped target local-fs-pre.target.
Sep 10 00:50:30.656468 systemd[1]: Stopped target swap.target.
Sep 10 00:50:30.663877 kernel: audit: type=1131 audit(1757465430.658:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.657870 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:50:30.657966 systemd[1]: Stopped dracut-pre-mount.service.
Sep 10 00:50:30.670154 kernel: audit: type=1131 audit(1757465430.665:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.659603 systemd[1]: Stopped target cryptsetup.target.
Sep 10 00:50:30.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.663905 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:50:30.664003 systemd[1]: Stopped dracut-initqueue.service.
Sep 10 00:50:30.665783 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:50:30.665869 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 10 00:50:30.670269 systemd[1]: Stopped target paths.target.
Sep 10 00:50:30.671707 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:50:30.675678 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 10 00:50:30.677109 systemd[1]: Stopped target slices.target.
Sep 10 00:50:30.678808 systemd[1]: Stopped target sockets.target.
Sep 10 00:50:30.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.680394 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:50:30.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.680527 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 10 00:50:30.686885 iscsid[715]: iscsid shutting down.
Sep 10 00:50:30.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.682068 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:50:30.682162 systemd[1]: Stopped ignition-files.service.
Sep 10 00:50:30.684407 systemd[1]: Stopping ignition-mount.service...
Sep 10 00:50:30.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.693646 ignition[874]: INFO : Ignition 2.14.0
Sep 10 00:50:30.693646 ignition[874]: INFO : Stage: umount
Sep 10 00:50:30.693646 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:50:30.693646 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:50:30.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.685447 systemd[1]: Stopping iscsid.service...
Sep 10 00:50:30.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.698794 ignition[874]: INFO : umount: umount passed
Sep 10 00:50:30.698794 ignition[874]: INFO : Ignition finished successfully
Sep 10 00:50:30.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.686812 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:50:30.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.686945 systemd[1]: Stopped kmod-static-nodes.service.
Sep 10 00:50:30.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.689793 systemd[1]: Stopping sysroot-boot.service...
Sep 10 00:50:30.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.690924 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:50:30.691067 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 10 00:50:30.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.692755 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:50:30.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.692837 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 10 00:50:30.695627 systemd[1]: iscsid.service: Deactivated successfully.
Sep 10 00:50:30.696303 systemd[1]: Stopped iscsid.service.
Sep 10 00:50:30.698503 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:50:30.698599 systemd[1]: Stopped ignition-mount.service.
Sep 10 00:50:30.699851 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:50:30.699919 systemd[1]: Closed iscsid.socket.
Sep 10 00:50:30.700259 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:50:30.700293 systemd[1]: Stopped ignition-disks.service.
Sep 10 00:50:30.701801 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:50:30.701833 systemd[1]: Stopped ignition-kargs.service.
Sep 10 00:50:30.703915 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:50:30.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.704236 systemd[1]: Stopped ignition-setup.service.
Sep 10 00:50:30.705447 systemd[1]: Stopping iscsiuio.service...
Sep 10 00:50:30.706841 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:50:30.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.706916 systemd[1]: Finished initrd-cleanup.service.
Sep 10 00:50:30.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.708599 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 10 00:50:30.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.708666 systemd[1]: Stopped iscsiuio.service.
Sep 10 00:50:30.710602 systemd[1]: Stopped target network.target.
Sep 10 00:50:30.711601 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:50:30.711631 systemd[1]: Closed iscsiuio.socket.
Sep 10 00:50:30.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.713094 systemd[1]: Stopping systemd-networkd.service...
Sep 10 00:50:30.714698 systemd[1]: Stopping systemd-resolved.service...
Sep 10 00:50:30.719616 systemd-networkd[710]: eth0: DHCPv6 lease lost
Sep 10 00:50:30.740000 audit: BPF prog-id=9 op=UNLOAD
Sep 10 00:50:30.720537 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:50:30.741000 audit: BPF prog-id=6 op=UNLOAD
Sep 10 00:50:30.720638 systemd[1]: Stopped systemd-networkd.service.
Sep 10 00:50:30.724006 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:50:30.724038 systemd[1]: Closed systemd-networkd.socket.
Sep 10 00:50:30.726224 systemd[1]: Stopping network-cleanup.service...
Sep 10 00:50:30.726972 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:50:30.727024 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 10 00:50:30.729542 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:50:30.729586 systemd[1]: Stopped systemd-sysctl.service.
Sep 10 00:50:30.732008 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:50:30.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.732047 systemd[1]: Stopped systemd-modules-load.service.
Sep 10 00:50:30.732974 systemd[1]: Stopping systemd-udevd.service...
Sep 10 00:50:30.735404 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:50:30.735473 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 00:50:30.736007 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:50:30.736090 systemd[1]: Stopped systemd-resolved.service.
Sep 10 00:50:30.749846 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:50:30.750051 systemd[1]: Stopped systemd-udevd.service.
Sep 10 00:50:30.755181 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:50:30.758303 systemd[1]: Stopped network-cleanup.service.
Sep 10 00:50:30.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.762345 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:50:30.762385 systemd[1]: Closed systemd-udevd-control.socket.
Sep 10 00:50:30.765002 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:50:30.765037 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 10 00:50:30.767586 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:50:30.768647 systemd[1]: Stopped dracut-pre-udev.service.
Sep 10 00:50:30.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.770312 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:50:30.770347 systemd[1]: Stopped dracut-cmdline.service.
Sep 10 00:50:30.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.772777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:50:30.772814 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 10 00:50:30.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.776415 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 10 00:50:30.815533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:50:30.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.816440 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 10 00:50:30.819336 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 00:50:30.820404 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 10 00:50:30.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.935156 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 00:50:30.936152 systemd[1]: Stopped sysroot-boot.service.
Sep 10 00:50:30.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.937806 systemd[1]: Reached target initrd-switch-root.target.
Sep 10 00:50:30.939513 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 00:50:30.939552 systemd[1]: Stopped initrd-setup-root.service.
Sep 10 00:50:30.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:30.942735 systemd[1]: Starting initrd-switch-root.service...
Sep 10 00:50:30.960102 systemd[1]: Switching root.
Sep 10 00:50:30.979410 systemd-journald[198]: Journal stopped
Sep 10 00:50:34.620851 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Sep 10 00:50:34.620920 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 10 00:50:34.620939 kernel: SELinux: Class anon_inode not defined in policy.
Sep 10 00:50:34.620949 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 10 00:50:34.620969 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:50:34.620978 kernel: SELinux: policy capability open_perms=1
Sep 10 00:50:34.620992 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:50:34.621002 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:50:34.621011 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:50:34.621021 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:50:34.621031 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:50:34.621041 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:50:34.621051 systemd[1]: Successfully loaded SELinux policy in 38.461ms.
Sep 10 00:50:34.621070 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.934ms.
Sep 10 00:50:34.621082 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 10 00:50:34.621094 systemd[1]: Detected virtualization kvm.
Sep 10 00:50:34.621105 systemd[1]: Detected architecture x86-64.
Sep 10 00:50:34.621115 systemd[1]: Detected first boot.
Sep 10 00:50:34.621126 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:50:34.621138 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 10 00:50:34.621148 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:50:34.621159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 10 00:50:34.621175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 10 00:50:34.621187 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:50:34.621201 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 00:50:34.621220 systemd[1]: Stopped initrd-switch-root.service.
Sep 10 00:50:34.621231 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:50:34.621246 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 10 00:50:34.621257 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 10 00:50:34.621267 systemd[1]: Created slice system-getty.slice.
Sep 10 00:50:34.621278 systemd[1]: Created slice system-modprobe.slice.
Sep 10 00:50:34.621288 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 10 00:50:34.621299 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 10 00:50:34.621310 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 10 00:50:34.621321 systemd[1]: Created slice user.slice.
Sep 10 00:50:34.621331 systemd[1]: Started systemd-ask-password-console.path.
Sep 10 00:50:34.621343 systemd[1]: Started systemd-ask-password-wall.path.
Sep 10 00:50:34.621354 systemd[1]: Set up automount boot.automount.
Sep 10 00:50:34.621365 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 10 00:50:34.621375 systemd[1]: Stopped target initrd-switch-root.target.
Sep 10 00:50:34.621386 systemd[1]: Stopped target initrd-fs.target.
Sep 10 00:50:34.621396 systemd[1]: Stopped target initrd-root-fs.target.
Sep 10 00:50:34.621407 systemd[1]: Reached target integritysetup.target.
Sep 10 00:50:34.621418 systemd[1]: Reached target remote-cryptsetup.target.
Sep 10 00:50:34.621430 systemd[1]: Reached target remote-fs.target.
Sep 10 00:50:34.621441 systemd[1]: Reached target slices.target.
Sep 10 00:50:34.621458 systemd[1]: Reached target swap.target.
Sep 10 00:50:34.621469 systemd[1]: Reached target torcx.target.
Sep 10 00:50:34.621480 systemd[1]: Reached target veritysetup.target.
Sep 10 00:50:34.621491 systemd[1]: Listening on systemd-coredump.socket.
Sep 10 00:50:34.621501 systemd[1]: Listening on systemd-initctl.socket.
Sep 10 00:50:34.621511 systemd[1]: Listening on systemd-networkd.socket.
Sep 10 00:50:34.621522 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 10 00:50:34.621535 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 10 00:50:34.621545 systemd[1]: Listening on systemd-userdbd.socket.
Sep 10 00:50:34.621556 systemd[1]: Mounting dev-hugepages.mount...
Sep 10 00:50:34.621566 systemd[1]: Mounting dev-mqueue.mount...
Sep 10 00:50:34.621596 systemd[1]: Mounting media.mount...
Sep 10 00:50:34.621608 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:34.621618 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 10 00:50:34.621629 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 10 00:50:34.621640 systemd[1]: Mounting tmp.mount...
Sep 10 00:50:34.621653 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 10 00:50:34.621667 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 10 00:50:34.621678 systemd[1]: Starting kmod-static-nodes.service...
Sep 10 00:50:34.621689 systemd[1]: Starting modprobe@configfs.service...
Sep 10 00:50:34.621700 systemd[1]: Starting modprobe@dm_mod.service...
Sep 10 00:50:34.621710 systemd[1]: Starting modprobe@drm.service...
Sep 10 00:50:34.621721 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 10 00:50:34.621731 systemd[1]: Starting modprobe@fuse.service...
Sep 10 00:50:34.621742 systemd[1]: Starting modprobe@loop.service...
Sep 10 00:50:34.621752 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 00:50:34.621765 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 00:50:34.621785 systemd[1]: Stopped systemd-fsck-root.service.
Sep 10 00:50:34.621796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 00:50:34.621806 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 00:50:34.621816 systemd[1]: Stopped systemd-journald.service.
Sep 10 00:50:34.621827 kernel: loop: module loaded
Sep 10 00:50:34.621837 kernel: fuse: init (API version 7.34)
Sep 10 00:50:34.621847 systemd[1]: Starting systemd-journald.service...
Sep 10 00:50:34.621858 systemd[1]: Starting systemd-modules-load.service...
Sep 10 00:50:34.621870 systemd[1]: Starting systemd-network-generator.service...
Sep 10 00:50:34.621881 systemd[1]: Starting systemd-remount-fs.service...
Sep 10 00:50:34.621900 systemd[1]: Starting systemd-udev-trigger.service...
Sep 10 00:50:34.621911 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 00:50:34.621922 systemd[1]: Stopped verity-setup.service.
Sep 10 00:50:34.621933 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:34.621944 systemd[1]: Mounted dev-hugepages.mount.
Sep 10 00:50:34.621955 systemd[1]: Mounted dev-mqueue.mount.
Sep 10 00:50:34.621965 systemd[1]: Mounted media.mount.
Sep 10 00:50:34.621978 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 10 00:50:34.621994 systemd-journald[989]: Journal started
Sep 10 00:50:34.622032 systemd-journald[989]: Runtime Journal (/run/log/journal/43e575e54c074bbe80aa96552e5cf03e) is 6.0M, max 48.5M, 42.5M free.
Sep 10 00:50:31.039000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 00:50:31.226000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 10 00:50:31.226000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 10 00:50:31.227000 audit: BPF prog-id=10 op=LOAD
Sep 10 00:50:31.227000 audit: BPF prog-id=10 op=UNLOAD
Sep 10 00:50:31.227000 audit: BPF prog-id=11 op=LOAD
Sep 10 00:50:31.227000 audit: BPF prog-id=11 op=UNLOAD
Sep 10 00:50:31.258000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 10 00:50:31.258000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c000180482 a1=c0001963c0 a2=c000186940 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 10 00:50:31.258000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 10 00:50:31.260000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 10 00:50:31.260000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000180559 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 10 00:50:31.260000 audit: CWD cwd="/"
Sep 10 00:50:31.260000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 10 00:50:31.260000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 10 00:50:31.260000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 10 00:50:34.396000 audit: BPF prog-id=12 op=LOAD
Sep 10 00:50:34.396000 audit: BPF prog-id=3 op=UNLOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=13 op=LOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=14 op=LOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=4 op=UNLOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=5 op=UNLOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=15 op=LOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=12 op=UNLOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=16 op=LOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=17 op=LOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=13 op=UNLOAD
Sep 10 00:50:34.397000 audit: BPF prog-id=14 op=UNLOAD
Sep 10 00:50:34.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:34.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:34.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:34.408000 audit: BPF prog-id=15 op=UNLOAD
Sep 10 00:50:34.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:34.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:34.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:34.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:34.595000 audit: BPF prog-id=18 op=LOAD
Sep 10 00:50:34.595000 audit: BPF prog-id=19 op=LOAD
Sep 10 00:50:34.595000 audit: BPF prog-id=20 op=LOAD
Sep 10 00:50:34.596000 audit: BPF prog-id=16 op=UNLOAD
Sep 10 00:50:34.596000 audit: BPF prog-id=17 op=UNLOAD
Sep 10 00:50:34.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 10 00:50:34.619000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 10 00:50:34.619000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdd7bb8910 a2=4000 a3=7ffdd7bb89ac items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:50:34.619000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 10 00:50:34.395390 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:50:31.257735 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:50:34.395404 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 10 00:50:31.258065 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 10 00:50:34.398858 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 10 00:50:31.258085 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 10 00:50:31.258118 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 10 00:50:31.258127 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 10 00:50:31.258165 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 10 00:50:31.258179 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 10 00:50:31.258376 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 10 00:50:31.258414 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 10 00:50:31.258426 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 10 00:50:31.258793 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 10 00:50:31.258826 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 10 00:50:31.258842 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 10 00:50:31.258855 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 10 00:50:31.258869 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 10 00:50:34.624591 systemd[1]: Started systemd-journald.service. Sep 10 00:50:31.258882 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 10 00:50:34.026165 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:34Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:50:34.026436 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:34Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:50:34.026564 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:34Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:50:34.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.026775 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:34Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 10 00:50:34.026823 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:34Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 10 00:50:34.026893 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-09-10T00:50:34Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 10 00:50:34.625298 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 10 00:50:34.626156 systemd[1]: Mounted tmp.mount. Sep 10 00:50:34.627068 systemd[1]: Finished flatcar-tmpfiles.service. Sep 10 00:50:34.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.628134 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:50:34.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 10 00:50:34.629125 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:50:34.629306 systemd[1]: Finished modprobe@configfs.service. Sep 10 00:50:34.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.630317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:50:34.630494 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:50:34.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.631478 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:50:34.631638 systemd[1]: Finished modprobe@drm.service. Sep 10 00:50:34.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:50:34.632693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:50:34.632898 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:50:34.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.633911 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:50:34.634101 systemd[1]: Finished modprobe@fuse.service. Sep 10 00:50:34.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.635051 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:50:34.635198 systemd[1]: Finished modprobe@loop.service. Sep 10 00:50:34.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.636213 systemd[1]: Finished systemd-modules-load.service. 
Sep 10 00:50:34.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.637699 systemd[1]: Finished systemd-network-generator.service. Sep 10 00:50:34.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.638987 systemd[1]: Finished systemd-remount-fs.service. Sep 10 00:50:34.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.640362 systemd[1]: Reached target network-pre.target. Sep 10 00:50:34.642516 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 10 00:50:34.644618 systemd[1]: Mounting sys-kernel-config.mount... Sep 10 00:50:34.645405 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:50:34.647762 systemd[1]: Starting systemd-hwdb-update.service... Sep 10 00:50:34.649616 systemd[1]: Starting systemd-journal-flush.service... Sep 10 00:50:34.650807 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:50:34.651969 systemd[1]: Starting systemd-random-seed.service... Sep 10 00:50:34.653130 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:50:34.654324 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:50:34.656065 systemd-journald[989]: Time spent on flushing to /var/log/journal/43e575e54c074bbe80aa96552e5cf03e is 17.460ms for 1100 entries. 
Sep 10 00:50:34.656065 systemd-journald[989]: System Journal (/var/log/journal/43e575e54c074bbe80aa96552e5cf03e) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:50:34.689954 systemd-journald[989]: Received client request to flush runtime journal. Sep 10 00:50:34.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:34.657331 systemd[1]: Starting systemd-sysusers.service... Sep 10 00:50:34.661000 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 10 00:50:34.690524 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 10 00:50:34.662126 systemd[1]: Mounted sys-kernel-config.mount. Sep 10 00:50:34.666026 systemd[1]: Finished systemd-random-seed.service. Sep 10 00:50:34.667073 systemd[1]: Reached target first-boot-complete.target. Sep 10 00:50:34.670227 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:50:34.671371 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:50:34.673555 systemd[1]: Starting systemd-udev-settle.service... 
Sep 10 00:50:34.680527 systemd[1]: Finished systemd-sysusers.service. Sep 10 00:50:34.690871 systemd[1]: Finished systemd-journal-flush.service. Sep 10 00:50:34.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.367813 systemd[1]: Finished systemd-hwdb-update.service. Sep 10 00:50:35.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.369000 audit: BPF prog-id=21 op=LOAD Sep 10 00:50:35.369000 audit: BPF prog-id=22 op=LOAD Sep 10 00:50:35.369000 audit: BPF prog-id=7 op=UNLOAD Sep 10 00:50:35.369000 audit: BPF prog-id=8 op=UNLOAD Sep 10 00:50:35.370648 systemd[1]: Starting systemd-udevd.service... Sep 10 00:50:35.387350 systemd-udevd[1014]: Using default interface naming scheme 'v252'. Sep 10 00:50:35.400956 systemd[1]: Started systemd-udevd.service. Sep 10 00:50:35.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.407000 audit: BPF prog-id=23 op=LOAD Sep 10 00:50:35.409505 systemd[1]: Starting systemd-networkd.service... Sep 10 00:50:35.415000 audit: BPF prog-id=24 op=LOAD Sep 10 00:50:35.415000 audit: BPF prog-id=25 op=LOAD Sep 10 00:50:35.415000 audit: BPF prog-id=26 op=LOAD Sep 10 00:50:35.416815 systemd[1]: Starting systemd-userdbd.service... Sep 10 00:50:35.435508 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 10 00:50:35.454865 systemd[1]: Started systemd-userdbd.service. 
Sep 10 00:50:35.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.481547 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:50:35.500000 audit[1028]: AVC avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 10 00:50:35.504691 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:50:35.521613 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:50:35.500000 audit[1028]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5577fc357850 a1=338ec a2=7f5e904cabc5 a3=5 items=110 ppid=1014 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:50:35.500000 audit: CWD cwd="/" Sep 10 00:50:35.500000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=1 name=(null) inode=15982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=2 name=(null) inode=15982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=3 name=(null) inode=15983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 
00:50:35.500000 audit: PATH item=4 name=(null) inode=15982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=5 name=(null) inode=15984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=6 name=(null) inode=15982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=7 name=(null) inode=15985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=8 name=(null) inode=15985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=9 name=(null) inode=15986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=10 name=(null) inode=15985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=11 name=(null) inode=15987 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=12 name=(null) inode=15985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=13 name=(null) 
inode=15988 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=14 name=(null) inode=15985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=15 name=(null) inode=15989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=16 name=(null) inode=15985 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=17 name=(null) inode=15990 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=18 name=(null) inode=15982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=19 name=(null) inode=15991 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=20 name=(null) inode=15991 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=21 name=(null) inode=15992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=22 name=(null) inode=15991 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=23 name=(null) inode=15993 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=24 name=(null) inode=15991 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=25 name=(null) inode=15994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=26 name=(null) inode=15991 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=27 name=(null) inode=15995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=28 name=(null) inode=15991 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=29 name=(null) inode=15996 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=30 name=(null) inode=15982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=31 name=(null) inode=15997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=32 name=(null) inode=15997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=33 name=(null) inode=15998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=34 name=(null) inode=15997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=35 name=(null) inode=15999 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=36 name=(null) inode=15997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=37 name=(null) inode=16000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=38 name=(null) inode=15997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=39 name=(null) inode=16001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=40 name=(null) inode=15997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=41 name=(null) inode=16002 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=42 name=(null) inode=15982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=43 name=(null) inode=16003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=44 name=(null) inode=16003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=45 name=(null) inode=16004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=46 name=(null) inode=16003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=47 name=(null) inode=16005 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=48 name=(null) inode=16003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=49 name=(null) inode=16006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=50 name=(null) inode=16003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=51 name=(null) inode=16007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=52 name=(null) inode=16003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=53 name=(null) inode=16008 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=55 name=(null) inode=16009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=56 name=(null) inode=16009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=57 name=(null) inode=16010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=58 name=(null) inode=16009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 
00:50:35.500000 audit: PATH item=59 name=(null) inode=16011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=60 name=(null) inode=16009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=61 name=(null) inode=16012 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=62 name=(null) inode=16012 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=63 name=(null) inode=16013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=64 name=(null) inode=16012 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=65 name=(null) inode=16014 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=66 name=(null) inode=16012 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=67 name=(null) inode=16015 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=68 
name=(null) inode=16012 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=69 name=(null) inode=16016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=70 name=(null) inode=16012 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=71 name=(null) inode=16017 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=72 name=(null) inode=16009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=73 name=(null) inode=16018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=74 name=(null) inode=16018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=75 name=(null) inode=16019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=76 name=(null) inode=16018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=77 name=(null) inode=16020 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=78 name=(null) inode=16018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=79 name=(null) inode=16021 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=80 name=(null) inode=16018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=81 name=(null) inode=16022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=82 name=(null) inode=16018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=83 name=(null) inode=16023 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=84 name=(null) inode=16009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=85 name=(null) inode=16024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=86 name=(null) inode=16024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=87 name=(null) inode=16025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=88 name=(null) inode=16024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=89 name=(null) inode=16026 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=90 name=(null) inode=16024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=91 name=(null) inode=16027 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=92 name=(null) inode=16024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=93 name=(null) inode=16028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=94 name=(null) inode=16024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=95 name=(null) inode=16029 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=96 name=(null) inode=16009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=97 name=(null) inode=16030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=98 name=(null) inode=16030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=99 name=(null) inode=16031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=100 name=(null) inode=16030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=101 name=(null) inode=16032 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=102 name=(null) inode=16030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=103 name=(null) inode=16033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=104 name=(null) inode=16030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=105 name=(null) inode=16034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=106 name=(null) inode=16030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=107 name=(null) inode=16035 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PATH item=109 name=(null) inode=16036 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:50:35.500000 audit: PROCTITLE proctitle="(udev-worker)" Sep 10 00:50:35.544924 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:50:35.544358 systemd-networkd[1033]: lo: Link UP Sep 10 00:50:35.544366 systemd-networkd[1033]: lo: Gained carrier Sep 10 00:50:35.544749 systemd-networkd[1033]: Enumeration completed Sep 10 00:50:35.544841 systemd[1]: Started systemd-networkd.service. Sep 10 00:50:35.545400 systemd-networkd[1033]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:50:35.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:50:35.546721 systemd-networkd[1033]: eth0: Link UP Sep 10 00:50:35.546727 systemd-networkd[1033]: eth0: Gained carrier Sep 10 00:50:35.548610 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:50:35.548716 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:50:35.549136 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:50:35.549267 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:50:35.567724 systemd-networkd[1033]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:50:35.721898 kernel: kvm: Nested Virtualization enabled Sep 10 00:50:35.722008 kernel: SVM: kvm: Nested Paging enabled Sep 10 00:50:35.722024 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 10 00:50:35.723083 kernel: SVM: Virtual GIF supported Sep 10 00:50:35.740618 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:50:35.767932 systemd[1]: Finished systemd-udev-settle.service. Sep 10 00:50:35.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.769651 kernel: kauditd_printk_skb: 225 callbacks suppressed Sep 10 00:50:35.769687 kernel: audit: type=1130 audit(1757465435.768:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.770046 systemd[1]: Starting lvm2-activation-early.service... Sep 10 00:50:35.778070 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:50:35.803384 systemd[1]: Finished lvm2-activation-early.service. 
Sep 10 00:50:35.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.804344 systemd[1]: Reached target cryptsetup.target. Sep 10 00:50:35.808224 kernel: audit: type=1130 audit(1757465435.803:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.809162 systemd[1]: Starting lvm2-activation.service... Sep 10 00:50:35.812740 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:50:35.839223 systemd[1]: Finished lvm2-activation.service. Sep 10 00:50:35.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.850431 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:50:35.853595 kernel: audit: type=1130 audit(1757465435.849:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.854206 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:50:35.854228 systemd[1]: Reached target local-fs.target. Sep 10 00:50:35.855007 systemd[1]: Reached target machines.target. Sep 10 00:50:35.856670 systemd[1]: Starting ldconfig.service... Sep 10 00:50:35.857733 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 10 00:50:35.857779 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:50:35.859046 systemd[1]: Starting systemd-boot-update.service... Sep 10 00:50:35.861180 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 10 00:50:35.863185 systemd[1]: Starting systemd-machine-id-commit.service... Sep 10 00:50:35.864978 systemd[1]: Starting systemd-sysext.service... Sep 10 00:50:35.866010 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl) Sep 10 00:50:35.867090 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 10 00:50:35.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.871555 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 10 00:50:35.875613 kernel: audit: type=1130 audit(1757465435.871:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.876223 systemd[1]: Unmounting usr-share-oem.mount... Sep 10 00:50:35.880557 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 10 00:50:35.880760 systemd[1]: Unmounted usr-share-oem.mount. Sep 10 00:50:35.888598 kernel: loop0: detected capacity change from 0 to 221472 Sep 10 00:50:35.909791 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31) Sep 10 00:50:35.909791 systemd-fsck[1060]: /dev/vda1: 790 files, 120765/258078 clusters Sep 10 00:50:35.911770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Sep 10 00:50:35.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.917630 kernel: audit: type=1130 audit(1757465435.912:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.914441 systemd[1]: Mounting boot.mount... Sep 10 00:50:35.931969 systemd[1]: Mounted boot.mount. Sep 10 00:50:35.947230 systemd[1]: Finished systemd-boot-update.service. Sep 10 00:50:35.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:35.952610 kernel: audit: type=1130 audit(1757465435.947:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.744619 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:50:36.760792 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:50:36.761365 systemd[1]: Finished systemd-machine-id-commit.service. Sep 10 00:50:36.762296 kernel: loop1: detected capacity change from 0 to 221472 Sep 10 00:50:36.762354 kernel: audit: type=1130 audit(1757465436.761:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:50:36.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.764590 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:50:36.768723 systemd[1]: Finished ldconfig.service. Sep 10 00:50:36.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.772601 kernel: audit: type=1130 audit(1757465436.769:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.773608 (sd-sysext)[1066]: Using extensions 'kubernetes'. Sep 10 00:50:36.773980 (sd-sysext)[1066]: Merged extensions into '/usr'. Sep 10 00:50:36.788714 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:50:36.790180 systemd[1]: Mounting usr-share-oem.mount... Sep 10 00:50:36.791106 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:50:36.792478 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:50:36.794604 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:50:36.796793 systemd[1]: Starting modprobe@loop.service... Sep 10 00:50:36.797602 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:50:36.797706 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 10 00:50:36.797810 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:50:36.800147 systemd[1]: Mounted usr-share-oem.mount. Sep 10 00:50:36.801254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:50:36.801363 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:50:36.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.802546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:50:36.802667 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:50:36.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.808432 kernel: audit: type=1130 audit(1757465436.801:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.808477 kernel: audit: type=1131 audit(1757465436.801:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 10 00:50:36.809722 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:50:36.809825 systemd[1]: Finished modprobe@loop.service. Sep 10 00:50:36.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.811026 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:50:36.811117 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:50:36.811911 systemd[1]: Finished systemd-sysext.service. Sep 10 00:50:36.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:36.813991 systemd[1]: Starting ensure-sysext.service... Sep 10 00:50:36.815782 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 10 00:50:36.821140 systemd[1]: Reloading. Sep 10 00:50:36.828069 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 10 00:50:36.830060 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:50:36.833636 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 10 00:50:36.904754 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2025-09-10T00:50:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:50:36.904784 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2025-09-10T00:50:36Z" level=info msg="torcx already run" Sep 10 00:50:36.962237 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:50:36.962256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:50:36.979258 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 10 00:50:37.031000 audit: BPF prog-id=27 op=LOAD Sep 10 00:50:37.031000 audit: BPF prog-id=23 op=UNLOAD Sep 10 00:50:37.032000 audit: BPF prog-id=28 op=LOAD Sep 10 00:50:37.033000 audit: BPF prog-id=29 op=LOAD Sep 10 00:50:37.033000 audit: BPF prog-id=21 op=UNLOAD Sep 10 00:50:37.033000 audit: BPF prog-id=22 op=UNLOAD Sep 10 00:50:37.033000 audit: BPF prog-id=30 op=LOAD Sep 10 00:50:37.033000 audit: BPF prog-id=18 op=UNLOAD Sep 10 00:50:37.033000 audit: BPF prog-id=31 op=LOAD Sep 10 00:50:37.033000 audit: BPF prog-id=32 op=LOAD Sep 10 00:50:37.033000 audit: BPF prog-id=19 op=UNLOAD Sep 10 00:50:37.033000 audit: BPF prog-id=20 op=UNLOAD Sep 10 00:50:37.034000 audit: BPF prog-id=33 op=LOAD Sep 10 00:50:37.034000 audit: BPF prog-id=24 op=UNLOAD Sep 10 00:50:37.034000 audit: BPF prog-id=34 op=LOAD Sep 10 00:50:37.034000 audit: BPF prog-id=35 op=LOAD Sep 10 00:50:37.035000 audit: BPF prog-id=25 op=UNLOAD Sep 10 00:50:37.035000 audit: BPF prog-id=26 op=UNLOAD Sep 10 00:50:37.039496 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 10 00:50:37.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:50:37.043049 systemd[1]: Starting audit-rules.service... Sep 10 00:50:37.044641 systemd[1]: Starting clean-ca-certificates.service... Sep 10 00:50:37.047059 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 10 00:50:37.048000 audit: BPF prog-id=36 op=LOAD Sep 10 00:50:37.049895 systemd[1]: Starting systemd-resolved.service... Sep 10 00:50:37.050000 audit: BPF prog-id=37 op=LOAD Sep 10 00:50:37.052087 systemd[1]: Starting systemd-timesyncd.service... Sep 10 00:50:37.054008 systemd[1]: Starting systemd-update-utmp.service... Sep 10 00:50:37.055373 systemd[1]: Finished clean-ca-certificates.service. 
Sep 10 00:50:37.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.058100 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:50:37.060570 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:37.060781 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.061927 systemd[1]: Starting modprobe@dm_mod.service...
Sep 10 00:50:37.061000 audit[1146]: SYSTEM_BOOT pid=1146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.064294 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 10 00:50:37.066318 systemd[1]: Starting modprobe@loop.service...
Sep 10 00:50:37.067424 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.067536 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 10 00:50:37.067649 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:50:37.067720 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:37.068547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:50:37.068693 systemd[1]: Finished modprobe@dm_mod.service.
Sep 10 00:50:37.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.069862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:50:37.069971 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 10 00:50:37.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.071213 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:50:37.071319 systemd[1]: Finished modprobe@loop.service.
Sep 10 00:50:37.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.074017 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:50:37.074164 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.075333 systemd[1]: Finished systemd-update-utmp.service.
Sep 10 00:50:37.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.077715 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 10 00:50:37.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.079839 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:37.080040 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.081374 systemd[1]: Starting modprobe@dm_mod.service...
Sep 10 00:50:37.083351 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 10 00:50:37.085269 systemd[1]: Starting modprobe@loop.service...
Sep 10 00:50:37.086064 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.086170 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 10 00:50:37.087416 systemd[1]: Starting systemd-update-done.service...
Sep 10 00:50:37.088336 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:50:37.088427 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:37.089371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:50:37.089487 systemd[1]: Finished modprobe@dm_mod.service.
Sep 10 00:50:37.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.091888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:50:37.092039 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 10 00:50:37.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:50:37.095000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 10 00:50:37.095922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:50:37.096023 systemd[1]: Finished modprobe@loop.service.
Sep 10 00:50:37.095000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffccefd9750 a2=420 a3=0 items=0 ppid=1135 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 10 00:50:37.095000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 10 00:50:37.096435 augenrules[1160]: No rules
Sep 10 00:50:37.097281 systemd[1]: Finished audit-rules.service.
Sep 10 00:50:37.098432 systemd[1]: Finished systemd-update-done.service.
Sep 10 00:50:37.101560 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:37.102149 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.103337 systemd[1]: Starting modprobe@dm_mod.service...
Sep 10 00:50:37.105303 systemd[1]: Starting modprobe@drm.service...
Sep 10 00:50:37.106569 systemd-timesyncd[1144]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 00:50:37.106638 systemd-timesyncd[1144]: Initial clock synchronization to Wed 2025-09-10 00:50:37.244184 UTC.
Sep 10 00:50:37.107467 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 10 00:50:37.109297 systemd[1]: Starting modprobe@loop.service...
Sep 10 00:50:37.110167 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.110271 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 10 00:50:37.111388 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 10 00:50:37.112560 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:50:37.112687 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:50:37.113675 systemd[1]: Started systemd-timesyncd.service.
Sep 10 00:50:37.115558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:50:37.115866 systemd[1]: Finished modprobe@dm_mod.service.
Sep 10 00:50:37.117103 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:50:37.117208 systemd[1]: Finished modprobe@drm.service.
Sep 10 00:50:37.118376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:50:37.118482 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 10 00:50:37.119735 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:50:37.119850 systemd[1]: Finished modprobe@loop.service.
Sep 10 00:50:37.121153 systemd[1]: Reached target time-set.target.
Sep 10 00:50:37.122182 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:50:37.122219 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.122519 systemd[1]: Finished ensure-sysext.service.
Sep 10 00:50:37.129655 systemd-resolved[1142]: Positive Trust Anchors:
Sep 10 00:50:37.129670 systemd-resolved[1142]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:50:37.129704 systemd-resolved[1142]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 10 00:50:37.137101 systemd-resolved[1142]: Defaulting to hostname 'linux'.
Sep 10 00:50:37.138557 systemd[1]: Started systemd-resolved.service.
Sep 10 00:50:37.139545 systemd[1]: Reached target network.target.
Sep 10 00:50:37.140356 systemd[1]: Reached target nss-lookup.target.
Sep 10 00:50:37.141253 systemd[1]: Reached target sysinit.target.
Sep 10 00:50:37.142139 systemd[1]: Started motdgen.path.
Sep 10 00:50:37.142893 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 10 00:50:37.144136 systemd[1]: Started logrotate.timer.
Sep 10 00:50:37.144971 systemd[1]: Started mdadm.timer.
Sep 10 00:50:37.145721 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 10 00:50:37.146629 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 10 00:50:37.146650 systemd[1]: Reached target paths.target.
Sep 10 00:50:37.147423 systemd[1]: Reached target timers.target.
Sep 10 00:50:37.148563 systemd[1]: Listening on dbus.socket.
Sep 10 00:50:37.150364 systemd[1]: Starting docker.socket...
Sep 10 00:50:37.153235 systemd[1]: Listening on sshd.socket.
Sep 10 00:50:37.154167 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 10 00:50:37.154540 systemd[1]: Listening on docker.socket.
Sep 10 00:50:37.155454 systemd[1]: Reached target sockets.target.
Sep 10 00:50:37.156324 systemd[1]: Reached target basic.target.
Sep 10 00:50:37.157196 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.157224 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 10 00:50:37.158073 systemd[1]: Starting containerd.service...
Sep 10 00:50:37.159952 systemd[1]: Starting dbus.service...
Sep 10 00:50:37.161857 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 10 00:50:37.164635 systemd[1]: Starting extend-filesystems.service...
Sep 10 00:50:37.165734 jq[1177]: false
Sep 10 00:50:37.165756 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 10 00:50:37.166875 systemd[1]: Starting motdgen.service...
Sep 10 00:50:37.168786 systemd[1]: Starting prepare-helm.service...
Sep 10 00:50:37.170973 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 10 00:50:37.172930 systemd[1]: Starting sshd-keygen.service...
Sep 10 00:50:37.176096 systemd[1]: Starting systemd-logind.service...
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found loop1
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found sr0
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda1
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda2
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda3
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found usr
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda4
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda6
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda7
Sep 10 00:50:37.177265 extend-filesystems[1178]: Found vda9
Sep 10 00:50:37.177265 extend-filesystems[1178]: Checking size of /dev/vda9
Sep 10 00:50:37.176997 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 10 00:50:37.229064 dbus-daemon[1176]: [system] SELinux support is enabled
Sep 10 00:50:37.177057 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 10 00:50:37.177437 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 10 00:50:37.230087 jq[1192]: true
Sep 10 00:50:37.178114 systemd[1]: Starting update-engine.service...
Sep 10 00:50:37.180415 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 10 00:50:37.182504 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 10 00:50:37.230557 jq[1196]: true
Sep 10 00:50:37.182781 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 10 00:50:37.224350 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 10 00:50:37.224511 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 10 00:50:37.229237 systemd[1]: Started dbus.service.
Sep 10 00:50:37.234695 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 10 00:50:37.234772 systemd[1]: Reached target system-config.target.
Sep 10 00:50:37.235782 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 10 00:50:37.235801 systemd[1]: Reached target user-config.target.
Sep 10 00:50:37.243418 systemd[1]: motdgen.service: Deactivated successfully.
Sep 10 00:50:37.243597 systemd[1]: Finished motdgen.service.
Sep 10 00:50:37.251555 tar[1195]: linux-amd64/helm
Sep 10 00:50:37.260963 update_engine[1189]: I0910 00:50:37.260766 1189 main.cc:92] Flatcar Update Engine starting
Sep 10 00:50:37.461798 systemd-logind[1186]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 10 00:50:37.461849 systemd-logind[1186]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 10 00:50:37.462028 systemd-logind[1186]: New seat seat0.
Sep 10 00:50:37.462698 systemd-networkd[1033]: eth0: Gained IPv6LL
Sep 10 00:50:37.463700 systemd[1]: Started systemd-logind.service.
Sep 10 00:50:37.464223 update_engine[1189]: I0910 00:50:37.463891 1189 update_check_scheduler.cc:74] Next update check in 4m14s
Sep 10 00:50:37.464891 systemd[1]: Started update-engine.service.
Sep 10 00:50:37.467780 systemd[1]: Started locksmithd.service.
Sep 10 00:50:37.469347 bash[1223]: Updated "/home/core/.ssh/authorized_keys"
Sep 10 00:50:37.469539 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 10 00:50:37.470929 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 10 00:50:37.472372 systemd[1]: Reached target network-online.target.
Sep 10 00:50:37.473846 extend-filesystems[1178]: Resized partition /dev/vda9
Sep 10 00:50:37.475337 systemd[1]: Starting kubelet.service...
Sep 10 00:50:37.477546 extend-filesystems[1227]: resize2fs 1.46.5 (30-Dec-2021)
Sep 10 00:50:37.480148 env[1203]: time="2025-09-10T00:50:37.478296994Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 10 00:50:37.495600 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 10 00:50:37.508043 env[1203]: time="2025-09-10T00:50:37.508006359Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 10 00:50:37.508337 env[1203]: time="2025-09-10T00:50:37.508307153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.510535702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.510559596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.510880698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.510896949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.510908571Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.510917207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.510983191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.511219844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.511322888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:50:37.511534 env[1203]: time="2025-09-10T00:50:37.511337104Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 10 00:50:37.533976 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 10 00:50:37.534010 env[1203]: time="2025-09-10T00:50:37.511379784Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 10 00:50:37.534010 env[1203]: time="2025-09-10T00:50:37.511389833Z" level=info msg="metadata content store policy set" policy=shared
Sep 10 00:50:37.535924 extend-filesystems[1227]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 10 00:50:37.535924 extend-filesystems[1227]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 10 00:50:37.535924 extend-filesystems[1227]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 10 00:50:37.543152 extend-filesystems[1178]: Resized filesystem in /dev/vda9
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540219639Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540277267Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540289540Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540351807Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540371113Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540383366Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540394557Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540407191Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540433330Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540444531Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540455852Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540467083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540647281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 10 00:50:37.544263 env[1203]: time="2025-09-10T00:50:37.540740325Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 10 00:50:37.536429 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541042702Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541084160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541096754Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541166144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541179208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541190299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541200698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541211348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541222129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541232037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541241655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541252866Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541475965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541491894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.544999 env[1203]: time="2025-09-10T00:50:37.541502424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.536596 systemd[1]: Finished extend-filesystems.service.
Sep 10 00:50:37.545372 env[1203]: time="2025-09-10T00:50:37.541528613Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 10 00:50:37.545372 env[1203]: time="2025-09-10T00:50:37.541542309Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 10 00:50:37.545372 env[1203]: time="2025-09-10T00:50:37.541552087Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 10 00:50:37.545372 env[1203]: time="2025-09-10T00:50:37.541593785Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 10 00:50:37.545372 env[1203]: time="2025-09-10T00:50:37.541639050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.541827253Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.541900030Z" level=info msg="Connect containerd service"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.541934855Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.542459669Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.542628966Z" level=info msg="Start subscribing containerd event"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.542686995Z" level=info msg="Start recovering state"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.542742579Z" level=info msg="Start event monitor"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.542764681Z" level=info msg="Start snapshots syncer"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.542776293Z" level=info msg="Start cni network conf syncer for default"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.542782574Z" level=info msg="Start streaming server"
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.543288152Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 10 00:50:37.545480 env[1203]: time="2025-09-10T00:50:37.543336804Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 10 00:50:37.551669 env[1203]: time="2025-09-10T00:50:37.551551556Z" level=info msg="containerd successfully booted in 0.074532s"
Sep 10 00:50:37.551612 systemd[1]: Started containerd.service.
Sep 10 00:50:37.683085 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 10 00:50:37.746598 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 10 00:50:37.768493 systemd[1]: Finished sshd-keygen.service.
Sep 10 00:50:37.770879 systemd[1]: Starting issuegen.service...
Sep 10 00:50:37.776314 systemd[1]: issuegen.service: Deactivated successfully.
Sep 10 00:50:37.776432 systemd[1]: Finished issuegen.service.
Sep 10 00:50:37.778402 systemd[1]: Starting systemd-user-sessions.service...
Sep 10 00:50:37.787315 systemd[1]: Finished systemd-user-sessions.service.
Sep 10 00:50:37.789616 systemd[1]: Started getty@tty1.service.
Sep 10 00:50:37.791469 systemd[1]: Started serial-getty@ttyS0.service.
Sep 10 00:50:37.792746 systemd[1]: Reached target getty.target.
Sep 10 00:50:38.074934 tar[1195]: linux-amd64/LICENSE
Sep 10 00:50:38.075078 tar[1195]: linux-amd64/README.md
Sep 10 00:50:38.079662 systemd[1]: Finished prepare-helm.service.
Sep 10 00:50:38.329740 systemd[1]: Created slice system-sshd.slice.
Sep 10 00:50:38.332885 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:48008.service.
Sep 10 00:50:38.459507 sshd[1264]: Accepted publickey for core from 10.0.0.1 port 48008 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:50:38.462253 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:50:38.477856 systemd[1]: Created slice user-500.slice. Sep 10 00:50:38.481600 systemd[1]: Starting user-runtime-dir@500.service... Sep 10 00:50:38.484974 systemd-logind[1186]: New session 1 of user core. Sep 10 00:50:38.534996 systemd[1]: Finished user-runtime-dir@500.service. Sep 10 00:50:38.537979 systemd[1]: Starting user@500.service... Sep 10 00:50:38.542861 (systemd)[1267]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:50:38.622277 systemd[1267]: Queued start job for default target default.target. Sep 10 00:50:38.623042 systemd[1267]: Reached target paths.target. Sep 10 00:50:38.623064 systemd[1267]: Reached target sockets.target. Sep 10 00:50:38.623082 systemd[1267]: Reached target timers.target. Sep 10 00:50:38.623095 systemd[1267]: Reached target basic.target. Sep 10 00:50:38.623208 systemd[1]: Started user@500.service. Sep 10 00:50:38.623355 systemd[1267]: Reached target default.target. Sep 10 00:50:38.623384 systemd[1267]: Startup finished in 74ms. Sep 10 00:50:38.673489 systemd[1]: Started session-1.scope. Sep 10 00:50:38.728313 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:48016.service. Sep 10 00:50:38.772536 sshd[1276]: Accepted publickey for core from 10.0.0.1 port 48016 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:50:38.774790 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:50:38.778498 systemd-logind[1186]: New session 2 of user core. Sep 10 00:50:38.779260 systemd[1]: Started session-2.scope. 
Sep 10 00:50:38.879061 sshd[1276]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:38.881906 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:48016.service: Deactivated successfully.
Sep 10 00:50:38.882431 systemd[1]: session-2.scope: Deactivated successfully.
Sep 10 00:50:38.882936 systemd-logind[1186]: Session 2 logged out. Waiting for processes to exit.
Sep 10 00:50:38.884043 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:48030.service.
Sep 10 00:50:38.886296 systemd-logind[1186]: Removed session 2.
Sep 10 00:50:38.926481 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 48030 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:50:38.928796 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:38.932509 systemd-logind[1186]: New session 3 of user core.
Sep 10 00:50:38.932919 systemd[1]: Started session-3.scope.
Sep 10 00:50:38.986791 systemd[1]: Started kubelet.service.
Sep 10 00:50:38.988368 systemd[1]: Reached target multi-user.target.
Sep 10 00:50:38.990573 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 10 00:50:38.993069 sshd[1282]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:38.996976 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:48030.service: Deactivated successfully.
Sep 10 00:50:38.997896 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 00:50:38.998582 systemd-logind[1186]: Session 3 logged out. Waiting for processes to exit.
Sep 10 00:50:38.999862 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 10 00:50:39.000059 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 10 00:50:39.001498 systemd[1]: Startup finished in 688ms (kernel) + 5.234s (initrd) + 8.002s (userspace) = 13.924s.
Sep 10 00:50:39.002372 systemd-logind[1186]: Removed session 3.
Sep 10 00:50:39.633313 kubelet[1289]: E0910 00:50:39.633216 1289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:50:39.634868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:50:39.634993 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:50:39.635482 systemd[1]: kubelet.service: Consumed 1.952s CPU time.
Sep 10 00:50:49.085863 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:60762.service.
Sep 10 00:50:49.128037 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:50:49.129328 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:49.132983 systemd-logind[1186]: New session 4 of user core.
Sep 10 00:50:49.133760 systemd[1]: Started session-4.scope.
Sep 10 00:50:49.189144 sshd[1299]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:49.192117 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:60762.service: Deactivated successfully.
Sep 10 00:50:49.192704 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 00:50:49.193167 systemd-logind[1186]: Session 4 logged out. Waiting for processes to exit.
Sep 10 00:50:49.194284 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:60776.service.
Sep 10 00:50:49.195124 systemd-logind[1186]: Removed session 4.
Sep 10 00:50:49.236833 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 60776 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:50:49.237980 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:49.241555 systemd-logind[1186]: New session 5 of user core.
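[Editor's note] The kubelet exit recorded above, repeated on every later restart in this log, is the standard "config file not found" failure: /var/lib/kubelet/config.yaml does not exist because the node has not yet been through `kubeadm init`/`kubeadm join`. For orientation only, a minimal KubeletConfiguration of the kind kubeadm drops at that path might look like the sketch below; every value is illustrative, not recovered from this host:

```yaml
# Illustrative sketch of /var/lib/kubelet/config.yaml -- normally generated by
# `kubeadm init` or `kubeadm join`, not written by hand. Values are assumptions.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                       # matches "CgroupDriver":"systemd" later in this log
staticPodPath: /etc/kubernetes/manifests    # matches the "Adding static pod path" entry
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt  # path seen in the dynamic_cafile_content entry
```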
Sep 10 00:50:49.242442 systemd[1]: Started session-5.scope.
Sep 10 00:50:49.292422 sshd[1305]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:49.295145 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:60776.service: Deactivated successfully.
Sep 10 00:50:49.295792 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 00:50:49.296424 systemd-logind[1186]: Session 5 logged out. Waiting for processes to exit.
Sep 10 00:50:49.297543 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:60788.service.
Sep 10 00:50:49.298447 systemd-logind[1186]: Removed session 5.
Sep 10 00:50:49.338292 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 60788 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:50:49.339759 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:49.343329 systemd-logind[1186]: New session 6 of user core.
Sep 10 00:50:49.344323 systemd[1]: Started session-6.scope.
Sep 10 00:50:49.398598 sshd[1311]: pam_unix(sshd:session): session closed for user core
Sep 10 00:50:49.401564 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:60788.service: Deactivated successfully.
Sep 10 00:50:49.402176 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 00:50:49.402755 systemd-logind[1186]: Session 6 logged out. Waiting for processes to exit.
Sep 10 00:50:49.403735 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:60802.service.
Sep 10 00:50:49.404446 systemd-logind[1186]: Removed session 6.
Sep 10 00:50:49.443603 sshd[1317]: Accepted publickey for core from 10.0.0.1 port 60802 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:50:49.444795 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:50:49.447960 systemd-logind[1186]: New session 7 of user core.
Sep 10 00:50:49.448831 systemd[1]: Started session-7.scope.
Sep 10 00:50:49.504396 sudo[1320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 00:50:49.504611 sudo[1320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 10 00:50:49.566225 systemd[1]: Starting docker.service...
Sep 10 00:50:49.633396 env[1331]: time="2025-09-10T00:50:49.633252112Z" level=info msg="Starting up"
Sep 10 00:50:49.634826 env[1331]: time="2025-09-10T00:50:49.634800293Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 10 00:50:49.634887 env[1331]: time="2025-09-10T00:50:49.634829915Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 10 00:50:49.634887 env[1331]: time="2025-09-10T00:50:49.634851560Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 10 00:50:49.634887 env[1331]: time="2025-09-10T00:50:49.634862414Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 10 00:50:49.637255 env[1331]: time="2025-09-10T00:50:49.637228710Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 10 00:50:49.637303 env[1331]: time="2025-09-10T00:50:49.637261360Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 10 00:50:49.637303 env[1331]: time="2025-09-10T00:50:49.637285671Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 10 00:50:49.637343 env[1331]: time="2025-09-10T00:50:49.637302861Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 10 00:50:49.640567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:50:49.640703 systemd[1]: Stopped kubelet.service.
Sep 10 00:50:49.640747 systemd[1]: kubelet.service: Consumed 1.952s CPU time.
Sep 10 00:50:49.642021 systemd[1]: Starting kubelet.service...
Sep 10 00:50:49.791084 systemd[1]: Started kubelet.service.
Sep 10 00:50:50.195807 kubelet[1345]: E0910 00:50:50.195723 1345 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:50:50.198650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:50:50.198773 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:50:50.516002 env[1331]: time="2025-09-10T00:50:50.515864652Z" level=info msg="Loading containers: start."
Sep 10 00:50:50.639637 kernel: Initializing XFRM netlink socket
Sep 10 00:50:50.667132 env[1331]: time="2025-09-10T00:50:50.667075006Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 10 00:50:50.719786 systemd-networkd[1033]: docker0: Link UP
Sep 10 00:50:50.735108 env[1331]: time="2025-09-10T00:50:50.735048947Z" level=info msg="Loading containers: done."
Sep 10 00:50:50.745701 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1006281343-merged.mount: Deactivated successfully.
Sep 10 00:50:50.750335 env[1331]: time="2025-09-10T00:50:50.750274110Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 00:50:50.750519 env[1331]: time="2025-09-10T00:50:50.750489044Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 10 00:50:50.750658 env[1331]: time="2025-09-10T00:50:50.750631372Z" level=info msg="Daemon has completed initialization"
Sep 10 00:50:50.769910 systemd[1]: Started docker.service.
Sep 10 00:50:50.778496 env[1331]: time="2025-09-10T00:50:50.778392693Z" level=info msg="API listen on /run/docker.sock"
Sep 10 00:50:51.580276 env[1203]: time="2025-09-10T00:50:51.580206576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 10 00:50:52.903535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066143690.mount: Deactivated successfully.
Sep 10 00:50:54.691383 env[1203]: time="2025-09-10T00:50:54.691301109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:54.693225 env[1203]: time="2025-09-10T00:50:54.693177245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:54.695091 env[1203]: time="2025-09-10T00:50:54.695065005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:54.696734 env[1203]: time="2025-09-10T00:50:54.696685443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:54.697346 env[1203]: time="2025-09-10T00:50:54.697318833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 10 00:50:54.698018 env[1203]: time="2025-09-10T00:50:54.697994537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 10 00:50:57.182639 env[1203]: time="2025-09-10T00:50:57.182565709Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:57.184967 env[1203]: time="2025-09-10T00:50:57.184922516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:57.189179 env[1203]: time="2025-09-10T00:50:57.189109809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:57.191300 env[1203]: time="2025-09-10T00:50:57.191248972Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:50:57.192061 env[1203]: time="2025-09-10T00:50:57.192023074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 10 00:50:57.192550 env[1203]: time="2025-09-10T00:50:57.192526702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 10 00:50:59.934094 env[1203]: time="2025-09-10T00:50:59.933997283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:00.239318 env[1203]: time="2025-09-10T00:51:00.239126966Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:00.398965 env[1203]: time="2025-09-10T00:51:00.398897137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:00.410960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 10 00:51:00.411198 systemd[1]: Stopped kubelet.service.
Sep 10 00:51:00.413242 systemd[1]: Starting kubelet.service...
Sep 10 00:51:00.509363 systemd[1]: Started kubelet.service.
Sep 10 00:51:00.598225 kubelet[1475]: E0910 00:51:00.598155 1475 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:51:00.600487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:51:00.600639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:51:00.614894 env[1203]: time="2025-09-10T00:51:00.614821926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:00.616113 env[1203]: time="2025-09-10T00:51:00.616039045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 10 00:51:00.616788 env[1203]: time="2025-09-10T00:51:00.616711698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 10 00:51:03.727009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981792062.mount: Deactivated successfully.
Sep 10 00:51:05.043431 env[1203]: time="2025-09-10T00:51:05.043347217Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:05.045522 env[1203]: time="2025-09-10T00:51:05.045481716Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:05.047911 env[1203]: time="2025-09-10T00:51:05.047866459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:05.050443 env[1203]: time="2025-09-10T00:51:05.050379383Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 10 00:51:05.051325 env[1203]: time="2025-09-10T00:51:05.051293134Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 00:51:05.051544 env[1203]: time="2025-09-10T00:51:05.051521488Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:05.699012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114907326.mount: Deactivated successfully.
Sep 10 00:51:07.380713 env[1203]: time="2025-09-10T00:51:07.380632370Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:07.383049 env[1203]: time="2025-09-10T00:51:07.382981965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:07.385205 env[1203]: time="2025-09-10T00:51:07.385163175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:07.387023 env[1203]: time="2025-09-10T00:51:07.386959665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:07.387761 env[1203]: time="2025-09-10T00:51:07.387681837Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 10 00:51:07.388370 env[1203]: time="2025-09-10T00:51:07.388343234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 00:51:08.063001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222103585.mount: Deactivated successfully.
Sep 10 00:51:08.071208 env[1203]: time="2025-09-10T00:51:08.071146580Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:08.073212 env[1203]: time="2025-09-10T00:51:08.073173474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:08.075064 env[1203]: time="2025-09-10T00:51:08.075015103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:08.077744 env[1203]: time="2025-09-10T00:51:08.077708284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:08.078253 env[1203]: time="2025-09-10T00:51:08.078216978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 10 00:51:08.078809 env[1203]: time="2025-09-10T00:51:08.078770027Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 10 00:51:10.593040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349228673.mount: Deactivated successfully.
Sep 10 00:51:10.660855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 10 00:51:10.661117 systemd[1]: Stopped kubelet.service.
Sep 10 00:51:10.663038 systemd[1]: Starting kubelet.service...
Sep 10 00:51:10.760625 systemd[1]: Started kubelet.service.
Sep 10 00:51:10.807599 kubelet[1486]: E0910 00:51:10.807510 1486 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:51:10.809491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:51:10.809634 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:51:15.319988 env[1203]: time="2025-09-10T00:51:15.319872693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:15.323027 env[1203]: time="2025-09-10T00:51:15.322928531Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:15.325076 env[1203]: time="2025-09-10T00:51:15.325050852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:15.327114 env[1203]: time="2025-09-10T00:51:15.327075066Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:51:15.327945 env[1203]: time="2025-09-10T00:51:15.327905009Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 10 00:51:17.631355 systemd[1]: Stopped kubelet.service.
Sep 10 00:51:17.633682 systemd[1]: Starting kubelet.service...
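[Editor's note] The roughly ten-second rhythm of these failures (restart counters 1, 2 and 3 logged at 00:50:49, 00:51:00 and 00:51:10) is systemd restart handling, not the kubelet retrying on its own. The kubelet unit file itself is not part of this log; a `[Service]` section that would produce this pattern might read as follows, with purely hypothetical values:

```ini
# Hypothetical excerpt -- the real kubelet.service on this Flatcar host is not shown.
[Service]
Restart=always     # the "Scheduled restart job" entries imply some Restart= policy is set
RestartSec=10      # consistent with the ~10 s gap between exit and the next start attempt
```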
Sep 10 00:51:17.658412 systemd[1]: Reloading.
Sep 10 00:51:17.718632 /usr/lib/systemd/system-generators/torcx-generator[1542]: time="2025-09-10T00:51:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 10 00:51:17.719116 /usr/lib/systemd/system-generators/torcx-generator[1542]: time="2025-09-10T00:51:17Z" level=info msg="torcx already run"
Sep 10 00:51:18.877049 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 10 00:51:18.877066 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 10 00:51:18.894166 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:51:18.972397 systemd[1]: Started kubelet.service.
Sep 10 00:51:18.973629 systemd[1]: Stopping kubelet.service...
Sep 10 00:51:18.973864 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 00:51:18.974012 systemd[1]: Stopped kubelet.service.
Sep 10 00:51:18.975318 systemd[1]: Starting kubelet.service...
Sep 10 00:51:19.078363 systemd[1]: Started kubelet.service.
Sep 10 00:51:19.119913 kubelet[1589]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:51:19.119913 kubelet[1589]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:51:19.119913 kubelet[1589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:51:19.120341 kubelet[1589]: I0910 00:51:19.119998 1589 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:51:19.308611 kubelet[1589]: I0910 00:51:19.308539 1589 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 00:51:19.308611 kubelet[1589]: I0910 00:51:19.308590 1589 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:51:19.309195 kubelet[1589]: I0910 00:51:19.309164 1589 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 00:51:19.330847 kubelet[1589]: E0910 00:51:19.330793 1589 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:51:19.331063 kubelet[1589]: I0910 00:51:19.331018 1589 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:51:19.336401 kubelet[1589]: E0910 00:51:19.336371 1589 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:51:19.336401 kubelet[1589]: I0910 00:51:19.336401 1589 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:51:19.341196 kubelet[1589]: I0910 00:51:19.341167 1589 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:51:19.341324 kubelet[1589]: I0910 00:51:19.341298 1589 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 00:51:19.341484 kubelet[1589]: I0910 00:51:19.341454 1589 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:51:19.341684 kubelet[1589]: I0910 00:51:19.341477 1589 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:51:19.341792 kubelet[1589]: I0910 00:51:19.341696 1589 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:51:19.341792 kubelet[1589]: I0910 00:51:19.341705 1589 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 00:51:19.341842 kubelet[1589]: I0910 00:51:19.341819 1589 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:51:19.347959 kubelet[1589]: I0910 00:51:19.347922 1589 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 00:51:19.347959 kubelet[1589]: I0910 00:51:19.347954 1589 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:51:19.348044 kubelet[1589]: I0910 00:51:19.348015 1589 kubelet.go:314] "Adding apiserver pod source"
Sep 10 00:51:19.348068 kubelet[1589]: I0910 00:51:19.348051 1589 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:51:19.455028 kubelet[1589]: I0910 00:51:19.454981 1589 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 10 00:51:19.455760 kubelet[1589]: W0910 00:51:19.455639 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 10 00:51:19.455838 kubelet[1589]: E0910 00:51:19.455773 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:51:19.455838 kubelet[1589]: I0910 00:51:19.455831 1589 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:51:19.455985 kubelet[1589]: W0910 00:51:19.455904 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Sep 10 00:51:19.455985 kubelet[1589]: W0910 00:51:19.455967 1589 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 00:51:19.456079 kubelet[1589]: E0910 00:51:19.456004 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:51:19.459376 kubelet[1589]: I0910 00:51:19.459339 1589 server.go:1274] "Started kubelet"
Sep 10 00:51:19.459727 kubelet[1589]: I0910 00:51:19.459688 1589 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:51:19.459989 kubelet[1589]: I0910 00:51:19.459412 1589 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:51:19.460270 kubelet[1589]: I0910 00:51:19.460046 1589 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:51:19.462566 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
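[Editor's note] The locksmithd.service warnings emitted during the `Reloading.` pass above flag CPUShares= and MemoryLimit=, both deprecated cgroup-v1 directives, on a host the log shows running cgroup v2 ("CgroupVersion":2). They can be retired without editing the vendor unit via a drop-in; the weight and limit values below are placeholders, not taken from this host:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf -- illustrative drop-in.
# Apply with `systemctl daemon-reload` afterwards.
[Service]
CPUShares=          # empty assignment clears the deprecated setting from the vendor unit
MemoryLimit=
CPUWeight=100       # placeholder replacement values
MemoryMax=512M
```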
Sep 10 00:51:19.462758 kubelet[1589]: I0910 00:51:19.462736 1589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:51:19.462831 kubelet[1589]: I0910 00:51:19.462785 1589 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:51:19.468861 kubelet[1589]: I0910 00:51:19.468821 1589 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:51:19.470184 kubelet[1589]: I0910 00:51:19.470152 1589 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:51:19.470402 kubelet[1589]: E0910 00:51:19.470379 1589 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:51:19.472293 kubelet[1589]: E0910 00:51:19.472259 1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Sep 10 00:51:19.472709 kubelet[1589]: I0910 00:51:19.472683 1589 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:51:19.472835 kubelet[1589]: I0910 00:51:19.472756 1589 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:51:19.473407 kubelet[1589]: I0910 00:51:19.473387 1589 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:51:19.473670 kubelet[1589]: I0910 00:51:19.473638 1589 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:51:19.473942 kubelet[1589]: W0910 00:51:19.473908 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.139:6443: connect: connection refused Sep 10 00:51:19.474075 kubelet[1589]: E0910 00:51:19.474052 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:51:19.474173 kubelet[1589]: I0910 00:51:19.474157 1589 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:51:19.474620 kubelet[1589]: E0910 00:51:19.474563 1589 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:51:19.478479 kubelet[1589]: E0910 00:51:19.477368 1589 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c58ac82ec40a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:51:19.45929217 +0000 UTC m=+0.377547759,LastTimestamp:2025-09-10 00:51:19.45929217 +0000 UTC m=+0.377547759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:51:19.485343 kubelet[1589]: I0910 00:51:19.485271 1589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:51:19.486824 kubelet[1589]: I0910 00:51:19.486774 1589 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:51:19.486824 kubelet[1589]: I0910 00:51:19.486815 1589 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:51:19.486951 kubelet[1589]: I0910 00:51:19.486846 1589 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:51:19.486951 kubelet[1589]: E0910 00:51:19.486882 1589 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:51:19.489881 kubelet[1589]: W0910 00:51:19.489844 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 10 00:51:19.490156 kubelet[1589]: E0910 00:51:19.490097 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:51:19.490505 kubelet[1589]: I0910 00:51:19.490475 1589 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:51:19.490638 kubelet[1589]: I0910 00:51:19.490621 1589 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:51:19.490752 kubelet[1589]: I0910 00:51:19.490736 1589 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:51:19.570917 kubelet[1589]: E0910 00:51:19.570762 1589 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:51:19.587260 kubelet[1589]: E0910 00:51:19.587200 1589 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:51:19.671044 kubelet[1589]: E0910 00:51:19.670971 1589 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:51:19.673608 kubelet[1589]: E0910 00:51:19.673535 1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Sep 10 00:51:19.760782 kubelet[1589]: E0910 00:51:19.760631 1589 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c58ac82ec40a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:51:19.45929217 +0000 UTC m=+0.377547759,LastTimestamp:2025-09-10 00:51:19.45929217 +0000 UTC m=+0.377547759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:51:19.771727 kubelet[1589]: E0910 00:51:19.771663 1589 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:51:19.788029 kubelet[1589]: E0910 00:51:19.788003 1589 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:51:19.823715 kubelet[1589]: I0910 00:51:19.823523 1589 policy_none.go:49] "None policy: Start" Sep 10 00:51:19.825036 kubelet[1589]: I0910 00:51:19.824996 1589 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:51:19.825036 kubelet[1589]: I0910 00:51:19.825044 1589 state_mem.go:35] "Initializing new in-memory 
state store" Sep 10 00:51:19.834136 systemd[1]: Created slice kubepods.slice. Sep 10 00:51:19.838474 systemd[1]: Created slice kubepods-burstable.slice. Sep 10 00:51:19.841098 systemd[1]: Created slice kubepods-besteffort.slice. Sep 10 00:51:19.849754 kubelet[1589]: I0910 00:51:19.849712 1589 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:51:19.850057 kubelet[1589]: I0910 00:51:19.849903 1589 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:51:19.850057 kubelet[1589]: I0910 00:51:19.849920 1589 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:51:19.850253 kubelet[1589]: I0910 00:51:19.850227 1589 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:51:19.856594 kubelet[1589]: E0910 00:51:19.856533 1589 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:51:19.952441 kubelet[1589]: I0910 00:51:19.952388 1589 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:51:19.952959 kubelet[1589]: E0910 00:51:19.952918 1589 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 10 00:51:20.074393 kubelet[1589]: E0910 00:51:20.074220 1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Sep 10 00:51:20.155216 kubelet[1589]: I0910 00:51:20.155151 1589 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:51:20.155713 kubelet[1589]: E0910 00:51:20.155673 1589 kubelet_node_status.go:95] "Unable 
to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 10 00:51:20.195309 systemd[1]: Created slice kubepods-burstable-podbf6e6fd5c5ce8535aa1087e41c81d614.slice. Sep 10 00:51:20.206132 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 10 00:51:20.214925 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 10 00:51:20.277922 kubelet[1589]: I0910 00:51:20.277843 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf6e6fd5c5ce8535aa1087e41c81d614-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf6e6fd5c5ce8535aa1087e41c81d614\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:51:20.277922 kubelet[1589]: I0910 00:51:20.277899 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:20.277922 kubelet[1589]: I0910 00:51:20.277931 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:20.278214 kubelet[1589]: I0910 00:51:20.277961 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:20.278214 kubelet[1589]: I0910 00:51:20.277998 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:51:20.278214 kubelet[1589]: I0910 00:51:20.278018 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf6e6fd5c5ce8535aa1087e41c81d614-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf6e6fd5c5ce8535aa1087e41c81d614\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:51:20.278214 kubelet[1589]: I0910 00:51:20.278038 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:20.278214 kubelet[1589]: I0910 00:51:20.278062 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:20.278382 kubelet[1589]: I0910 00:51:20.278091 1589 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf6e6fd5c5ce8535aa1087e41c81d614-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"bf6e6fd5c5ce8535aa1087e41c81d614\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:51:20.347010 kubelet[1589]: W0910 00:51:20.346819 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 10 00:51:20.347010 kubelet[1589]: E0910 00:51:20.346923 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:51:20.505731 kubelet[1589]: E0910 00:51:20.505666 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:20.506426 env[1203]: time="2025-09-10T00:51:20.506389991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf6e6fd5c5ce8535aa1087e41c81d614,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:20.513667 kubelet[1589]: E0910 00:51:20.513641 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:20.514140 env[1203]: time="2025-09-10T00:51:20.514101703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:20.517486 kubelet[1589]: E0910 00:51:20.517449 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 10 00:51:20.517956 env[1203]: time="2025-09-10T00:51:20.517906284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:20.557328 kubelet[1589]: I0910 00:51:20.557283 1589 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:51:20.557819 kubelet[1589]: E0910 00:51:20.557761 1589 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 10 00:51:20.723984 kubelet[1589]: W0910 00:51:20.723904 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 10 00:51:20.723984 kubelet[1589]: E0910 00:51:20.723975 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:51:20.819157 kubelet[1589]: W0910 00:51:20.819054 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 10 00:51:20.819313 kubelet[1589]: E0910 00:51:20.819163 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: 
connection refused" logger="UnhandledError" Sep 10 00:51:20.875633 kubelet[1589]: E0910 00:51:20.875561 1589 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="1.6s" Sep 10 00:51:21.002908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211165941.mount: Deactivated successfully. Sep 10 00:51:21.012904 env[1203]: time="2025-09-10T00:51:21.012859193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.015842 env[1203]: time="2025-09-10T00:51:21.015793728Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.016679 env[1203]: time="2025-09-10T00:51:21.016646097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.017785 env[1203]: time="2025-09-10T00:51:21.017733834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.020448 env[1203]: time="2025-09-10T00:51:21.020421549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.021971 env[1203]: time="2025-09-10T00:51:21.021936703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 
00:51:21.023409 env[1203]: time="2025-09-10T00:51:21.023378819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.024983 env[1203]: time="2025-09-10T00:51:21.024956461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.027138 env[1203]: time="2025-09-10T00:51:21.027108237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.028504 env[1203]: time="2025-09-10T00:51:21.028472505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.029953 env[1203]: time="2025-09-10T00:51:21.029920434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.030485 env[1203]: time="2025-09-10T00:51:21.030457053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:21.035043 kubelet[1589]: W0910 00:51:21.034973 1589 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Sep 10 00:51:21.035127 kubelet[1589]: E0910 
00:51:21.035055 1589 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:51:21.078850 env[1203]: time="2025-09-10T00:51:21.078660636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:51:21.078850 env[1203]: time="2025-09-10T00:51:21.078710859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:51:21.078850 env[1203]: time="2025-09-10T00:51:21.078720227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:51:21.079058 env[1203]: time="2025-09-10T00:51:21.078886695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc8abfd298e246c93141b0cb3e58cee0172c28b4bc8685241182f5122e973ed3 pid=1637 runtime=io.containerd.runc.v2 Sep 10 00:51:21.081070 env[1203]: time="2025-09-10T00:51:21.080982779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:51:21.081070 env[1203]: time="2025-09-10T00:51:21.081047420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:51:21.081070 env[1203]: time="2025-09-10T00:51:21.081058462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:51:21.081828 env[1203]: time="2025-09-10T00:51:21.081712559Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc1ece1aef883c93c051d7368d8fb4c423ea627d7b591e606ebfa26f67b6ad63 pid=1640 runtime=io.containerd.runc.v2 Sep 10 00:51:21.091461 env[1203]: time="2025-09-10T00:51:21.091372211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:51:21.091461 env[1203]: time="2025-09-10T00:51:21.091407312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:51:21.091461 env[1203]: time="2025-09-10T00:51:21.091437413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:51:21.091710 env[1203]: time="2025-09-10T00:51:21.091631427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbec582bc3f73ed198c9d461d5fd2e189b2205241849ea1ec7da4df24fc2da0f pid=1665 runtime=io.containerd.runc.v2 Sep 10 00:51:21.105559 systemd[1]: Started cri-containerd-dc8abfd298e246c93141b0cb3e58cee0172c28b4bc8685241182f5122e973ed3.scope. Sep 10 00:51:21.140918 systemd[1]: Started cri-containerd-fc1ece1aef883c93c051d7368d8fb4c423ea627d7b591e606ebfa26f67b6ad63.scope. Sep 10 00:51:21.209012 systemd[1]: Started cri-containerd-cbec582bc3f73ed198c9d461d5fd2e189b2205241849ea1ec7da4df24fc2da0f.scope. 
Sep 10 00:51:21.279049 env[1203]: time="2025-09-10T00:51:21.278352528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bf6e6fd5c5ce8535aa1087e41c81d614,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc8abfd298e246c93141b0cb3e58cee0172c28b4bc8685241182f5122e973ed3\"" Sep 10 00:51:21.280330 kubelet[1589]: E0910 00:51:21.280102 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:21.283460 env[1203]: time="2025-09-10T00:51:21.282527259Z" level=info msg="CreateContainer within sandbox \"dc8abfd298e246c93141b0cb3e58cee0172c28b4bc8685241182f5122e973ed3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:51:21.285221 env[1203]: time="2025-09-10T00:51:21.285186295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc1ece1aef883c93c051d7368d8fb4c423ea627d7b591e606ebfa26f67b6ad63\"" Sep 10 00:51:21.286075 kubelet[1589]: E0910 00:51:21.286042 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:21.287591 env[1203]: time="2025-09-10T00:51:21.287543598Z" level=info msg="CreateContainer within sandbox \"fc1ece1aef883c93c051d7368d8fb4c423ea627d7b591e606ebfa26f67b6ad63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:51:21.292027 env[1203]: time="2025-09-10T00:51:21.291995031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbec582bc3f73ed198c9d461d5fd2e189b2205241849ea1ec7da4df24fc2da0f\"" Sep 10 00:51:21.292633 kubelet[1589]: E0910 00:51:21.292610 
1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:21.294099 env[1203]: time="2025-09-10T00:51:21.294071365Z" level=info msg="CreateContainer within sandbox \"cbec582bc3f73ed198c9d461d5fd2e189b2205241849ea1ec7da4df24fc2da0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:51:21.323490 env[1203]: time="2025-09-10T00:51:21.323455180Z" level=info msg="CreateContainer within sandbox \"fc1ece1aef883c93c051d7368d8fb4c423ea627d7b591e606ebfa26f67b6ad63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14f550d74e03f2c71fff5860f99f7c5085b87e587c6cda5f72f08d8e64bd7a70\"" Sep 10 00:51:21.324132 env[1203]: time="2025-09-10T00:51:21.324102123Z" level=info msg="StartContainer for \"14f550d74e03f2c71fff5860f99f7c5085b87e587c6cda5f72f08d8e64bd7a70\"" Sep 10 00:51:21.324352 env[1203]: time="2025-09-10T00:51:21.324296357Z" level=info msg="CreateContainer within sandbox \"dc8abfd298e246c93141b0cb3e58cee0172c28b4bc8685241182f5122e973ed3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec1c8c8c33224cc67597256ce33bebd844c6ef1e68bad2502f593ade6f43f63d\"" Sep 10 00:51:21.324737 env[1203]: time="2025-09-10T00:51:21.324709184Z" level=info msg="StartContainer for \"ec1c8c8c33224cc67597256ce33bebd844c6ef1e68bad2502f593ade6f43f63d\"" Sep 10 00:51:21.332353 env[1203]: time="2025-09-10T00:51:21.332309426Z" level=info msg="CreateContainer within sandbox \"cbec582bc3f73ed198c9d461d5fd2e189b2205241849ea1ec7da4df24fc2da0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db8e1b0ccb46ce80c9b9de9ffa71362617451b6a4ffd19ebdf4b69973c77e675\"" Sep 10 00:51:21.333059 env[1203]: time="2025-09-10T00:51:21.333005379Z" level=info msg="StartContainer for \"db8e1b0ccb46ce80c9b9de9ffa71362617451b6a4ffd19ebdf4b69973c77e675\"" Sep 10 
00:51:21.342405 systemd[1]: Started cri-containerd-ec1c8c8c33224cc67597256ce33bebd844c6ef1e68bad2502f593ade6f43f63d.scope. Sep 10 00:51:21.346667 systemd[1]: Started cri-containerd-14f550d74e03f2c71fff5860f99f7c5085b87e587c6cda5f72f08d8e64bd7a70.scope. Sep 10 00:51:21.359681 kubelet[1589]: I0910 00:51:21.359322 1589 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:51:21.359681 kubelet[1589]: E0910 00:51:21.359649 1589 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Sep 10 00:51:21.361618 systemd[1]: Started cri-containerd-db8e1b0ccb46ce80c9b9de9ffa71362617451b6a4ffd19ebdf4b69973c77e675.scope. Sep 10 00:51:21.397725 env[1203]: time="2025-09-10T00:51:21.392725742Z" level=info msg="StartContainer for \"14f550d74e03f2c71fff5860f99f7c5085b87e587c6cda5f72f08d8e64bd7a70\" returns successfully" Sep 10 00:51:21.406856 env[1203]: time="2025-09-10T00:51:21.406783368Z" level=info msg="StartContainer for \"ec1c8c8c33224cc67597256ce33bebd844c6ef1e68bad2502f593ade6f43f63d\" returns successfully" Sep 10 00:51:21.414373 env[1203]: time="2025-09-10T00:51:21.414307476Z" level=info msg="StartContainer for \"db8e1b0ccb46ce80c9b9de9ffa71362617451b6a4ffd19ebdf4b69973c77e675\" returns successfully" Sep 10 00:51:21.497637 kubelet[1589]: E0910 00:51:21.497428 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:21.501068 kubelet[1589]: E0910 00:51:21.500962 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:21.503900 kubelet[1589]: E0910 00:51:21.503744 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:22.509084 kubelet[1589]: E0910 00:51:22.509054 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:22.801970 kubelet[1589]: E0910 00:51:22.801814 1589 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:51:22.962973 kubelet[1589]: I0910 00:51:22.962882 1589 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:51:22.975083 kubelet[1589]: I0910 00:51:22.974994 1589 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:51:23.078835 update_engine[1189]: I0910 00:51:23.078682 1189 update_attempter.cc:509] Updating boot flags... Sep 10 00:51:23.350710 kubelet[1589]: I0910 00:51:23.350567 1589 apiserver.go:52] "Watching apiserver" Sep 10 00:51:23.357012 kubelet[1589]: E0910 00:51:23.356976 1589 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 00:51:23.357161 kubelet[1589]: E0910 00:51:23.357145 1589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:23.374320 kubelet[1589]: I0910 00:51:23.374276 1589 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:51:25.020521 systemd[1]: Reloading. 
Sep 10 00:51:25.079972 /usr/lib/systemd/system-generators/torcx-generator[1901]: time="2025-09-10T00:51:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:51:25.080006 /usr/lib/systemd/system-generators/torcx-generator[1901]: time="2025-09-10T00:51:25Z" level=info msg="torcx already run" Sep 10 00:51:25.305584 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:51:25.305600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:51:25.323125 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:51:25.416709 systemd[1]: Stopping kubelet.service... Sep 10 00:51:25.441046 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:51:25.441212 systemd[1]: Stopped kubelet.service. Sep 10 00:51:25.442830 systemd[1]: Starting kubelet.service... Sep 10 00:51:25.534257 systemd[1]: Started kubelet.service. Sep 10 00:51:25.579639 kubelet[1946]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:51:25.579639 kubelet[1946]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 10 00:51:25.579639 kubelet[1946]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:51:25.579639 kubelet[1946]: I0910 00:51:25.579555 1946 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:51:25.585962 kubelet[1946]: I0910 00:51:25.585923 1946 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:51:25.585962 kubelet[1946]: I0910 00:51:25.585955 1946 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:51:25.586241 kubelet[1946]: I0910 00:51:25.586215 1946 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:51:25.588277 kubelet[1946]: I0910 00:51:25.588250 1946 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:51:25.597283 kubelet[1946]: I0910 00:51:25.596134 1946 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:51:25.604133 kubelet[1946]: E0910 00:51:25.604078 1946 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:51:25.604133 kubelet[1946]: I0910 00:51:25.604134 1946 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:51:25.609500 kubelet[1946]: I0910 00:51:25.609469 1946 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:51:25.609641 kubelet[1946]: I0910 00:51:25.609620 1946 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:51:25.609770 kubelet[1946]: I0910 00:51:25.609735 1946 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:51:25.609928 kubelet[1946]: I0910 00:51:25.609767 1946 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 10 00:51:25.610022 kubelet[1946]: I0910 00:51:25.609935 1946 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:51:25.610022 kubelet[1946]: I0910 00:51:25.609942 1946 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:51:25.610022 kubelet[1946]: I0910 00:51:25.609977 1946 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:51:25.610091 kubelet[1946]: I0910 00:51:25.610068 1946 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:51:25.610091 kubelet[1946]: I0910 00:51:25.610080 1946 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:51:25.610142 kubelet[1946]: I0910 00:51:25.610105 1946 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:51:25.610142 kubelet[1946]: I0910 00:51:25.610116 1946 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:51:25.610781 kubelet[1946]: I0910 00:51:25.610695 1946 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 10 00:51:25.611047 kubelet[1946]: I0910 00:51:25.611033 1946 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:51:25.611456 kubelet[1946]: I0910 00:51:25.611443 1946 server.go:1274] "Started kubelet" Sep 10 00:51:25.613344 kubelet[1946]: I0910 00:51:25.613319 1946 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:51:25.618047 kubelet[1946]: I0910 00:51:25.616430 1946 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:51:25.618047 kubelet[1946]: I0910 00:51:25.617163 1946 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:51:25.618047 kubelet[1946]: I0910 00:51:25.617864 1946 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:51:25.618047 kubelet[1946]: I0910 00:51:25.618018 1946 server.go:236] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:51:25.619144 kubelet[1946]: I0910 00:51:25.618691 1946 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:51:25.619885 kubelet[1946]: I0910 00:51:25.619859 1946 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:51:25.619979 kubelet[1946]: I0910 00:51:25.619958 1946 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:51:25.620086 kubelet[1946]: I0910 00:51:25.620067 1946 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:51:25.623522 kubelet[1946]: I0910 00:51:25.623490 1946 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:51:25.623710 kubelet[1946]: I0910 00:51:25.623617 1946 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:51:25.625281 kubelet[1946]: E0910 00:51:25.625191 1946 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:51:25.625876 kubelet[1946]: E0910 00:51:25.625617 1946 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:51:25.628549 kubelet[1946]: I0910 00:51:25.628526 1946 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:51:25.630284 kubelet[1946]: I0910 00:51:25.629779 1946 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:51:25.632934 kubelet[1946]: I0910 00:51:25.632900 1946 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:51:25.632934 kubelet[1946]: I0910 00:51:25.632934 1946 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:51:25.633018 kubelet[1946]: I0910 00:51:25.632952 1946 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:51:25.633018 kubelet[1946]: E0910 00:51:25.632993 1946 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:51:25.657519 kubelet[1946]: I0910 00:51:25.657481 1946 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:51:25.657712 kubelet[1946]: I0910 00:51:25.657694 1946 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:51:25.657801 kubelet[1946]: I0910 00:51:25.657787 1946 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:51:25.657995 kubelet[1946]: I0910 00:51:25.657980 1946 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:51:25.658084 kubelet[1946]: I0910 00:51:25.658055 1946 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:51:25.658161 kubelet[1946]: I0910 00:51:25.658148 1946 policy_none.go:49] "None policy: Start" Sep 10 00:51:25.658793 kubelet[1946]: I0910 00:51:25.658773 1946 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:51:25.658872 kubelet[1946]: I0910 00:51:25.658800 1946 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:51:25.658967 kubelet[1946]: I0910 00:51:25.658952 1946 state_mem.go:75] "Updated machine memory state" Sep 10 00:51:25.662901 kubelet[1946]: I0910 00:51:25.662871 1946 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:51:25.663092 kubelet[1946]: I0910 00:51:25.663015 1946 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:51:25.663092 kubelet[1946]: I0910 00:51:25.663030 1946 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:51:25.663286 kubelet[1946]: I0910 00:51:25.663254 1946 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:51:25.766767 kubelet[1946]: I0910 00:51:25.766721 1946 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:51:25.772254 kubelet[1946]: I0910 00:51:25.772213 1946 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:51:25.772451 kubelet[1946]: I0910 00:51:25.772289 1946 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:51:25.921086 kubelet[1946]: I0910 00:51:25.921045 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf6e6fd5c5ce8535aa1087e41c81d614-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf6e6fd5c5ce8535aa1087e41c81d614\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:51:25.921086 kubelet[1946]: I0910 00:51:25.921081 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:25.921225 kubelet[1946]: I0910 00:51:25.921096 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:25.921225 kubelet[1946]: I0910 00:51:25.921113 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:51:25.921225 kubelet[1946]: I0910 00:51:25.921127 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf6e6fd5c5ce8535aa1087e41c81d614-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bf6e6fd5c5ce8535aa1087e41c81d614\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:51:25.921308 kubelet[1946]: I0910 00:51:25.921231 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf6e6fd5c5ce8535aa1087e41c81d614-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bf6e6fd5c5ce8535aa1087e41c81d614\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:51:25.921308 kubelet[1946]: I0910 00:51:25.921275 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:25.921308 kubelet[1946]: I0910 00:51:25.921291 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:25.921392 kubelet[1946]: I0910 00:51:25.921310 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:51:26.018808 sudo[1981]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 00:51:26.019005 sudo[1981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 10 00:51:26.038885 kubelet[1946]: E0910 00:51:26.038844 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:26.039057 kubelet[1946]: E0910 00:51:26.038940 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:26.039182 kubelet[1946]: E0910 00:51:26.039139 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:26.564919 sudo[1981]: pam_unix(sudo:session): session closed for user root Sep 10 00:51:26.610549 kubelet[1946]: I0910 00:51:26.610512 1946 apiserver.go:52] "Watching apiserver" Sep 10 00:51:26.620505 kubelet[1946]: I0910 00:51:26.620469 1946 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:51:26.642035 kubelet[1946]: E0910 00:51:26.642002 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:26.642186 kubelet[1946]: E0910 00:51:26.642114 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 10 00:51:26.747119 kubelet[1946]: E0910 00:51:26.747076 1946 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:51:26.747282 kubelet[1946]: E0910 00:51:26.747233 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:26.756096 kubelet[1946]: I0910 00:51:26.756043 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.75591665 podStartE2EDuration="1.75591665s" podCreationTimestamp="2025-09-10 00:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:51:26.747629074 +0000 UTC m=+1.209383855" watchObservedRunningTime="2025-09-10 00:51:26.75591665 +0000 UTC m=+1.217671421" Sep 10 00:51:26.763601 kubelet[1946]: I0910 00:51:26.763545 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7635313529999999 podStartE2EDuration="1.763531353s" podCreationTimestamp="2025-09-10 00:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:51:26.763525863 +0000 UTC m=+1.225280644" watchObservedRunningTime="2025-09-10 00:51:26.763531353 +0000 UTC m=+1.225286134" Sep 10 00:51:26.763822 kubelet[1946]: I0910 00:51:26.763794 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7637857810000002 podStartE2EDuration="1.763785781s" podCreationTimestamp="2025-09-10 00:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-10 00:51:26.756177891 +0000 UTC m=+1.217932672" watchObservedRunningTime="2025-09-10 00:51:26.763785781 +0000 UTC m=+1.225540582" Sep 10 00:51:27.643732 kubelet[1946]: E0910 00:51:27.643684 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:28.223527 sudo[1320]: pam_unix(sudo:session): session closed for user root Sep 10 00:51:28.224854 sshd[1317]: pam_unix(sshd:session): session closed for user core Sep 10 00:51:28.227598 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:60802.service: Deactivated successfully. Sep 10 00:51:28.228330 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:51:28.228464 systemd[1]: session-7.scope: Consumed 4.481s CPU time. Sep 10 00:51:28.228984 systemd-logind[1186]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:51:28.229833 systemd-logind[1186]: Removed session 7. Sep 10 00:51:28.645005 kubelet[1946]: E0910 00:51:28.644890 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:30.890565 kubelet[1946]: I0910 00:51:30.890529 1946 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:51:30.891049 env[1203]: time="2025-09-10T00:51:30.890997156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 00:51:30.891247 kubelet[1946]: I0910 00:51:30.891170 1946 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:51:32.082226 systemd[1]: Created slice kubepods-besteffort-podaca44a8d_d60b_45fd_9833_79b402e7c79b.slice. Sep 10 00:51:32.098365 systemd[1]: Created slice kubepods-burstable-pod4864cb9b_04e2_4260_b12c_2f6c967369f1.slice. 
Sep 10 00:51:32.123991 systemd[1]: Created slice kubepods-besteffort-pod3a04883d_8394_4dff_b7dd_76131a95da99.slice. Sep 10 00:51:32.258055 kubelet[1946]: I0910 00:51:32.257998 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aca44a8d-d60b-45fd-9833-79b402e7c79b-kube-proxy\") pod \"kube-proxy-nd4qk\" (UID: \"aca44a8d-d60b-45fd-9833-79b402e7c79b\") " pod="kube-system/kube-proxy-nd4qk" Sep 10 00:51:32.258055 kubelet[1946]: I0910 00:51:32.258043 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-etc-cni-netd\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258469 kubelet[1946]: I0910 00:51:32.258070 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-kernel\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258469 kubelet[1946]: I0910 00:51:32.258095 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-cgroup\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258469 kubelet[1946]: I0910 00:51:32.258116 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-lib-modules\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 
00:51:32.258469 kubelet[1946]: I0910 00:51:32.258138 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4864cb9b-04e2-4260-b12c-2f6c967369f1-clustermesh-secrets\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258469 kubelet[1946]: I0910 00:51:32.258177 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-run\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258469 kubelet[1946]: I0910 00:51:32.258248 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-hubble-tls\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258842 kubelet[1946]: I0910 00:51:32.258277 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-hostproc\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258842 kubelet[1946]: I0910 00:51:32.258290 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cni-path\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258842 kubelet[1946]: I0910 00:51:32.258307 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-k65lr\" (UniqueName: \"kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-kube-api-access-k65lr\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258842 kubelet[1946]: I0910 00:51:32.258342 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nq69\" (UniqueName: \"kubernetes.io/projected/aca44a8d-d60b-45fd-9833-79b402e7c79b-kube-api-access-4nq69\") pod \"kube-proxy-nd4qk\" (UID: \"aca44a8d-d60b-45fd-9833-79b402e7c79b\") " pod="kube-system/kube-proxy-nd4qk" Sep 10 00:51:32.258842 kubelet[1946]: I0910 00:51:32.258376 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-xtables-lock\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258972 kubelet[1946]: I0910 00:51:32.258390 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-config-path\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258972 kubelet[1946]: I0910 00:51:32.258406 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-net\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.258972 kubelet[1946]: I0910 00:51:32.258419 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3a04883d-8394-4dff-b7dd-76131a95da99-cilium-config-path\") pod \"cilium-operator-5d85765b45-p7mdj\" (UID: \"3a04883d-8394-4dff-b7dd-76131a95da99\") " pod="kube-system/cilium-operator-5d85765b45-p7mdj" Sep 10 00:51:32.258972 kubelet[1946]: I0910 00:51:32.258432 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csknd\" (UniqueName: \"kubernetes.io/projected/3a04883d-8394-4dff-b7dd-76131a95da99-kube-api-access-csknd\") pod \"cilium-operator-5d85765b45-p7mdj\" (UID: \"3a04883d-8394-4dff-b7dd-76131a95da99\") " pod="kube-system/cilium-operator-5d85765b45-p7mdj" Sep 10 00:51:32.258972 kubelet[1946]: I0910 00:51:32.258468 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aca44a8d-d60b-45fd-9833-79b402e7c79b-xtables-lock\") pod \"kube-proxy-nd4qk\" (UID: \"aca44a8d-d60b-45fd-9833-79b402e7c79b\") " pod="kube-system/kube-proxy-nd4qk" Sep 10 00:51:32.259121 kubelet[1946]: I0910 00:51:32.258485 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-bpf-maps\") pod \"cilium-6fzwb\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " pod="kube-system/cilium-6fzwb" Sep 10 00:51:32.259121 kubelet[1946]: I0910 00:51:32.258498 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aca44a8d-d60b-45fd-9833-79b402e7c79b-lib-modules\") pod \"kube-proxy-nd4qk\" (UID: \"aca44a8d-d60b-45fd-9833-79b402e7c79b\") " pod="kube-system/kube-proxy-nd4qk" Sep 10 00:51:32.360444 kubelet[1946]: I0910 00:51:32.359447 1946 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 10 00:51:32.395000 kubelet[1946]: E0910 00:51:32.394949 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.395606 env[1203]: time="2025-09-10T00:51:32.395542419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nd4qk,Uid:aca44a8d-d60b-45fd-9833-79b402e7c79b,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:32.403007 kubelet[1946]: E0910 00:51:32.402980 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.403536 env[1203]: time="2025-09-10T00:51:32.403496963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6fzwb,Uid:4864cb9b-04e2-4260-b12c-2f6c967369f1,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:32.413305 env[1203]: time="2025-09-10T00:51:32.413223692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:51:32.413305 env[1203]: time="2025-09-10T00:51:32.413272228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:51:32.413305 env[1203]: time="2025-09-10T00:51:32.413288640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:51:32.413538 env[1203]: time="2025-09-10T00:51:32.413470909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2184d7b7b2646c45373e93f322090262ac9adb7b5b2da8a47945ba5c4baaa73 pid=2040 runtime=io.containerd.runc.v2 Sep 10 00:51:32.423270 env[1203]: time="2025-09-10T00:51:32.423075398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:51:32.423270 env[1203]: time="2025-09-10T00:51:32.423118112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:51:32.423270 env[1203]: time="2025-09-10T00:51:32.423132090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:51:32.425073 systemd[1]: Started cri-containerd-a2184d7b7b2646c45373e93f322090262ac9adb7b5b2da8a47945ba5c4baaa73.scope. 
Sep 10 00:51:32.427768 kubelet[1946]: E0910 00:51:32.427735 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.428651 env[1203]: time="2025-09-10T00:51:32.428608811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-p7mdj,Uid:3a04883d-8394-4dff-b7dd-76131a95da99,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:32.430294 env[1203]: time="2025-09-10T00:51:32.430236752Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909 pid=2065 runtime=io.containerd.runc.v2 Sep 10 00:51:32.438921 kubelet[1946]: E0910 00:51:32.438855 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.447735 systemd[1]: Started cri-containerd-99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909.scope. Sep 10 00:51:32.457033 env[1203]: time="2025-09-10T00:51:32.456981521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nd4qk,Uid:aca44a8d-d60b-45fd-9833-79b402e7c79b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2184d7b7b2646c45373e93f322090262ac9adb7b5b2da8a47945ba5c4baaa73\"" Sep 10 00:51:32.458141 kubelet[1946]: E0910 00:51:32.458114 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.461017 env[1203]: time="2025-09-10T00:51:32.459935321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:51:32.461017 env[1203]: time="2025-09-10T00:51:32.459990459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:51:32.461017 env[1203]: time="2025-09-10T00:51:32.460000819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:51:32.461017 env[1203]: time="2025-09-10T00:51:32.460186194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b pid=2112 runtime=io.containerd.runc.v2 Sep 10 00:51:32.461385 env[1203]: time="2025-09-10T00:51:32.461187573Z" level=info msg="CreateContainer within sandbox \"a2184d7b7b2646c45373e93f322090262ac9adb7b5b2da8a47945ba5c4baaa73\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:51:32.478855 systemd[1]: Started cri-containerd-4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b.scope. 
Sep 10 00:51:32.481066 env[1203]: time="2025-09-10T00:51:32.481024116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6fzwb,Uid:4864cb9b-04e2-4260-b12c-2f6c967369f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\"" Sep 10 00:51:32.482057 kubelet[1946]: E0910 00:51:32.482031 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.484416 env[1203]: time="2025-09-10T00:51:32.484388081Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 00:51:32.493268 env[1203]: time="2025-09-10T00:51:32.493228489Z" level=info msg="CreateContainer within sandbox \"a2184d7b7b2646c45373e93f322090262ac9adb7b5b2da8a47945ba5c4baaa73\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"71b11bbbaabd3354013644ee6291c56a4910f13c85b4fa0af87795590a3204d9\"" Sep 10 00:51:32.494849 env[1203]: time="2025-09-10T00:51:32.494823806Z" level=info msg="StartContainer for \"71b11bbbaabd3354013644ee6291c56a4910f13c85b4fa0af87795590a3204d9\"" Sep 10 00:51:32.510157 systemd[1]: Started cri-containerd-71b11bbbaabd3354013644ee6291c56a4910f13c85b4fa0af87795590a3204d9.scope. 
Sep 10 00:51:32.518452 env[1203]: time="2025-09-10T00:51:32.518402367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-p7mdj,Uid:3a04883d-8394-4dff-b7dd-76131a95da99,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b\"" Sep 10 00:51:32.520891 kubelet[1946]: E0910 00:51:32.520835 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.538969 env[1203]: time="2025-09-10T00:51:32.538912345Z" level=info msg="StartContainer for \"71b11bbbaabd3354013644ee6291c56a4910f13c85b4fa0af87795590a3204d9\" returns successfully" Sep 10 00:51:32.655474 kubelet[1946]: E0910 00:51:32.655320 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:32.656182 kubelet[1946]: E0910 00:51:32.656058 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:35.659717 kubelet[1946]: E0910 00:51:35.659673 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:35.709182 kubelet[1946]: I0910 00:51:35.709101 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nd4qk" podStartSLOduration=3.709077201 podStartE2EDuration="3.709077201s" podCreationTimestamp="2025-09-10 00:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:51:32.713552721 +0000 UTC m=+7.175307502" watchObservedRunningTime="2025-09-10 00:51:35.709077201 
+0000 UTC m=+10.170831982" Sep 10 00:51:36.665776 kubelet[1946]: E0910 00:51:36.664251 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:36.827359 kubelet[1946]: E0910 00:51:36.827308 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:37.851331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397273899.mount: Deactivated successfully. Sep 10 00:51:42.531011 env[1203]: time="2025-09-10T00:51:42.530938253Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:42.532995 env[1203]: time="2025-09-10T00:51:42.532909723Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:42.534889 env[1203]: time="2025-09-10T00:51:42.534850594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:42.535436 env[1203]: time="2025-09-10T00:51:42.535398536Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 10 00:51:42.537157 env[1203]: time="2025-09-10T00:51:42.537124351Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:51:42.539518 env[1203]: time="2025-09-10T00:51:42.538556676Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:51:42.552437 env[1203]: time="2025-09-10T00:51:42.552387449Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\"" Sep 10 00:51:42.553539 env[1203]: time="2025-09-10T00:51:42.553464787Z" level=info msg="StartContainer for \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\"" Sep 10 00:51:42.575360 systemd[1]: run-containerd-runc-k8s.io-9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700-runc.BjUWC7.mount: Deactivated successfully. Sep 10 00:51:42.580706 systemd[1]: Started cri-containerd-9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700.scope. Sep 10 00:51:42.614074 systemd[1]: cri-containerd-9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700.scope: Deactivated successfully. 
Sep 10 00:51:42.661611 env[1203]: time="2025-09-10T00:51:42.661515596Z" level=info msg="StartContainer for \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\" returns successfully" Sep 10 00:51:42.674612 kubelet[1946]: E0910 00:51:42.674536 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:43.013108 env[1203]: time="2025-09-10T00:51:43.013031922Z" level=info msg="shim disconnected" id=9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700 Sep 10 00:51:43.013108 env[1203]: time="2025-09-10T00:51:43.013102288Z" level=warning msg="cleaning up after shim disconnected" id=9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700 namespace=k8s.io Sep 10 00:51:43.013108 env[1203]: time="2025-09-10T00:51:43.013117408Z" level=info msg="cleaning up dead shim" Sep 10 00:51:43.020360 env[1203]: time="2025-09-10T00:51:43.020321424Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2377 runtime=io.containerd.runc.v2\n" Sep 10 00:51:43.549906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700-rootfs.mount: Deactivated successfully. Sep 10 00:51:43.677254 kubelet[1946]: E0910 00:51:43.677158 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:43.680080 env[1203]: time="2025-09-10T00:51:43.679978014Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:51:43.697508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598899049.mount: Deactivated successfully. 
Sep 10 00:51:43.703107 env[1203]: time="2025-09-10T00:51:43.701611115Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\"" Sep 10 00:51:43.703107 env[1203]: time="2025-09-10T00:51:43.702496790Z" level=info msg="StartContainer for \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\"" Sep 10 00:51:43.720588 systemd[1]: Started cri-containerd-a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4.scope. Sep 10 00:51:43.753842 env[1203]: time="2025-09-10T00:51:43.753729395Z" level=info msg="StartContainer for \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\" returns successfully" Sep 10 00:51:43.765251 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:51:43.765605 systemd[1]: Stopped systemd-sysctl.service. Sep 10 00:51:43.765798 systemd[1]: Stopping systemd-sysctl.service... Sep 10 00:51:43.767538 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:51:43.768983 systemd[1]: cri-containerd-a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4.scope: Deactivated successfully. Sep 10 00:51:43.777298 systemd[1]: Finished systemd-sysctl.service. 
Sep 10 00:51:43.792431 env[1203]: time="2025-09-10T00:51:43.792372527Z" level=info msg="shim disconnected" id=a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4 Sep 10 00:51:43.792646 env[1203]: time="2025-09-10T00:51:43.792435058Z" level=warning msg="cleaning up after shim disconnected" id=a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4 namespace=k8s.io Sep 10 00:51:43.792646 env[1203]: time="2025-09-10T00:51:43.792451099Z" level=info msg="cleaning up dead shim" Sep 10 00:51:43.799424 env[1203]: time="2025-09-10T00:51:43.799359012Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2438 runtime=io.containerd.runc.v2\n" Sep 10 00:51:44.550358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4-rootfs.mount: Deactivated successfully. Sep 10 00:51:44.680948 kubelet[1946]: E0910 00:51:44.680893 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:44.696002 env[1203]: time="2025-09-10T00:51:44.695943739Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:51:44.712040 env[1203]: time="2025-09-10T00:51:44.711972130Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\"" Sep 10 00:51:44.712694 env[1203]: time="2025-09-10T00:51:44.712667895Z" level=info msg="StartContainer for \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\"" Sep 10 00:51:44.754773 systemd[1]: Started 
cri-containerd-28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72.scope. Sep 10 00:51:44.802105 systemd[1]: cri-containerd-28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72.scope: Deactivated successfully. Sep 10 00:51:44.874925 env[1203]: time="2025-09-10T00:51:44.874846428Z" level=info msg="StartContainer for \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\" returns successfully" Sep 10 00:51:44.937610 env[1203]: time="2025-09-10T00:51:44.937536317Z" level=info msg="shim disconnected" id=28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72 Sep 10 00:51:44.937610 env[1203]: time="2025-09-10T00:51:44.937602515Z" level=warning msg="cleaning up after shim disconnected" id=28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72 namespace=k8s.io Sep 10 00:51:44.937610 env[1203]: time="2025-09-10T00:51:44.937615070Z" level=info msg="cleaning up dead shim" Sep 10 00:51:44.950028 env[1203]: time="2025-09-10T00:51:44.949961830Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2493 runtime=io.containerd.runc.v2\n" Sep 10 00:51:45.040172 env[1203]: time="2025-09-10T00:51:45.040099659Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:45.042127 env[1203]: time="2025-09-10T00:51:45.042075928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:45.043680 env[1203]: time="2025-09-10T00:51:45.043646111Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:51:45.044202 env[1203]: time="2025-09-10T00:51:45.044176166Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 10 00:51:45.053480 env[1203]: time="2025-09-10T00:51:45.053375333Z" level=info msg="CreateContainer within sandbox \"4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 00:51:45.086546 env[1203]: time="2025-09-10T00:51:45.086482147Z" level=info msg="CreateContainer within sandbox \"4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\"" Sep 10 00:51:45.087325 env[1203]: time="2025-09-10T00:51:45.087275120Z" level=info msg="StartContainer for \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\"" Sep 10 00:51:45.103474 systemd[1]: Started cri-containerd-6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33.scope. Sep 10 00:51:45.130096 env[1203]: time="2025-09-10T00:51:45.130038565Z" level=info msg="StartContainer for \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\" returns successfully" Sep 10 00:51:45.551159 systemd[1]: run-containerd-runc-k8s.io-28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72-runc.ZTWi4Z.mount: Deactivated successfully. Sep 10 00:51:45.551602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72-rootfs.mount: Deactivated successfully. 
Sep 10 00:51:45.682980 kubelet[1946]: E0910 00:51:45.682910 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:45.684370 kubelet[1946]: E0910 00:51:45.684339 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:45.685741 env[1203]: time="2025-09-10T00:51:45.685707267Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:51:45.851138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510652998.mount: Deactivated successfully. Sep 10 00:51:45.856248 env[1203]: time="2025-09-10T00:51:45.856152622Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\"" Sep 10 00:51:45.857431 env[1203]: time="2025-09-10T00:51:45.857397377Z" level=info msg="StartContainer for \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\"" Sep 10 00:51:45.898080 systemd[1]: Started cri-containerd-597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf.scope. 
Sep 10 00:51:45.914852 kubelet[1946]: I0910 00:51:45.911300 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-p7mdj" podStartSLOduration=1.38755925 podStartE2EDuration="13.911273967s" podCreationTimestamp="2025-09-10 00:51:32 +0000 UTC" firstStartedPulling="2025-09-10 00:51:32.521964804 +0000 UTC m=+6.983719585" lastFinishedPulling="2025-09-10 00:51:45.045679521 +0000 UTC m=+19.507434302" observedRunningTime="2025-09-10 00:51:45.881794041 +0000 UTC m=+20.343548822" watchObservedRunningTime="2025-09-10 00:51:45.911273967 +0000 UTC m=+20.373028748" Sep 10 00:51:45.936652 env[1203]: time="2025-09-10T00:51:45.936592342Z" level=info msg="StartContainer for \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\" returns successfully" Sep 10 00:51:45.942437 systemd[1]: cri-containerd-597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf.scope: Deactivated successfully. Sep 10 00:51:46.019041 env[1203]: time="2025-09-10T00:51:46.018979986Z" level=info msg="shim disconnected" id=597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf Sep 10 00:51:46.019329 env[1203]: time="2025-09-10T00:51:46.019308580Z" level=warning msg="cleaning up after shim disconnected" id=597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf namespace=k8s.io Sep 10 00:51:46.019429 env[1203]: time="2025-09-10T00:51:46.019411509Z" level=info msg="cleaning up dead shim" Sep 10 00:51:46.027031 env[1203]: time="2025-09-10T00:51:46.026971395Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2588 runtime=io.containerd.runc.v2\n" Sep 10 00:51:46.550350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf-rootfs.mount: Deactivated successfully. 
Sep 10 00:51:46.690546 kubelet[1946]: E0910 00:51:46.690482 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:46.690957 kubelet[1946]: E0910 00:51:46.690773 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:46.703345 env[1203]: time="2025-09-10T00:51:46.701951241Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:51:46.728417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074426147.mount: Deactivated successfully. Sep 10 00:51:46.734589 env[1203]: time="2025-09-10T00:51:46.734519561Z" level=info msg="CreateContainer within sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\"" Sep 10 00:51:46.735331 env[1203]: time="2025-09-10T00:51:46.735282113Z" level=info msg="StartContainer for \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\"" Sep 10 00:51:46.750601 systemd[1]: Started cri-containerd-4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468.scope. Sep 10 00:51:46.777631 env[1203]: time="2025-09-10T00:51:46.777567376Z" level=info msg="StartContainer for \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\" returns successfully" Sep 10 00:51:46.874852 kubelet[1946]: I0910 00:51:46.874684 1946 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:51:46.905868 systemd[1]: Created slice kubepods-burstable-pod2c28079f_2873_46a8_9b0f_59e46f985aef.slice. 
Sep 10 00:51:46.913587 systemd[1]: Created slice kubepods-burstable-pod1a8c261e_3773_41a3_b13c_62dd494f6df8.slice. Sep 10 00:51:47.068827 kubelet[1946]: I0910 00:51:47.068770 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c28079f-2873-46a8-9b0f-59e46f985aef-config-volume\") pod \"coredns-7c65d6cfc9-ch57f\" (UID: \"2c28079f-2873-46a8-9b0f-59e46f985aef\") " pod="kube-system/coredns-7c65d6cfc9-ch57f" Sep 10 00:51:47.068827 kubelet[1946]: I0910 00:51:47.068810 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a8c261e-3773-41a3-b13c-62dd494f6df8-config-volume\") pod \"coredns-7c65d6cfc9-d82mg\" (UID: \"1a8c261e-3773-41a3-b13c-62dd494f6df8\") " pod="kube-system/coredns-7c65d6cfc9-d82mg" Sep 10 00:51:47.068827 kubelet[1946]: I0910 00:51:47.068830 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhrl4\" (UniqueName: \"kubernetes.io/projected/1a8c261e-3773-41a3-b13c-62dd494f6df8-kube-api-access-dhrl4\") pod \"coredns-7c65d6cfc9-d82mg\" (UID: \"1a8c261e-3773-41a3-b13c-62dd494f6df8\") " pod="kube-system/coredns-7c65d6cfc9-d82mg" Sep 10 00:51:47.069018 kubelet[1946]: I0910 00:51:47.068879 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc85j\" (UniqueName: \"kubernetes.io/projected/2c28079f-2873-46a8-9b0f-59e46f985aef-kube-api-access-wc85j\") pod \"coredns-7c65d6cfc9-ch57f\" (UID: \"2c28079f-2873-46a8-9b0f-59e46f985aef\") " pod="kube-system/coredns-7c65d6cfc9-ch57f" Sep 10 00:51:47.088874 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:35026.service. 
Sep 10 00:51:47.132979 sshd[2708]: Accepted publickey for core from 10.0.0.1 port 35026 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:51:47.134401 sshd[2708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:51:47.141197 systemd[1]: Started session-8.scope. Sep 10 00:51:47.142626 systemd-logind[1186]: New session 8 of user core. Sep 10 00:51:47.215249 kubelet[1946]: E0910 00:51:47.215194 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:47.216148 env[1203]: time="2025-09-10T00:51:47.216093027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ch57f,Uid:2c28079f-2873-46a8-9b0f-59e46f985aef,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:47.217914 kubelet[1946]: E0910 00:51:47.217880 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:47.218348 env[1203]: time="2025-09-10T00:51:47.218291739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d82mg,Uid:1a8c261e-3773-41a3-b13c-62dd494f6df8,Namespace:kube-system,Attempt:0,}" Sep 10 00:51:47.322565 sshd[2708]: pam_unix(sshd:session): session closed for user core Sep 10 00:51:47.324825 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:35026.service: Deactivated successfully. Sep 10 00:51:47.325522 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:51:47.326011 systemd-logind[1186]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:51:47.326659 systemd-logind[1186]: Removed session 8. 
Sep 10 00:51:47.695843 kubelet[1946]: E0910 00:51:47.694135 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:47.715060 kubelet[1946]: I0910 00:51:47.714999 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6fzwb" podStartSLOduration=5.662056373 podStartE2EDuration="15.714976251s" podCreationTimestamp="2025-09-10 00:51:32 +0000 UTC" firstStartedPulling="2025-09-10 00:51:32.483910984 +0000 UTC m=+6.945665765" lastFinishedPulling="2025-09-10 00:51:42.536830862 +0000 UTC m=+16.998585643" observedRunningTime="2025-09-10 00:51:47.712047421 +0000 UTC m=+22.173802202" watchObservedRunningTime="2025-09-10 00:51:47.714976251 +0000 UTC m=+22.176731022" Sep 10 00:51:48.696500 kubelet[1946]: E0910 00:51:48.696457 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:48.771909 systemd-networkd[1033]: cilium_host: Link UP Sep 10 00:51:48.772050 systemd-networkd[1033]: cilium_net: Link UP Sep 10 00:51:48.772055 systemd-networkd[1033]: cilium_net: Gained carrier Sep 10 00:51:48.772240 systemd-networkd[1033]: cilium_host: Gained carrier Sep 10 00:51:48.773692 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 10 00:51:48.783710 systemd-networkd[1033]: cilium_host: Gained IPv6LL Sep 10 00:51:48.858128 systemd-networkd[1033]: cilium_vxlan: Link UP Sep 10 00:51:48.858136 systemd-networkd[1033]: cilium_vxlan: Gained carrier Sep 10 00:51:49.061622 kernel: NET: Registered PF_ALG protocol family Sep 10 00:51:49.564718 systemd-networkd[1033]: cilium_net: Gained IPv6LL Sep 10 00:51:49.623631 systemd-networkd[1033]: lxc_health: Link UP Sep 10 00:51:49.632525 systemd-networkd[1033]: lxc_health: Gained carrier Sep 10 00:51:49.632642 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 10 00:51:49.698254 kubelet[1946]: E0910 00:51:49.698208 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:49.719626 kernel: eth0: renamed from tmp64c54 Sep 10 00:51:49.723765 systemd-networkd[1033]: lxc8e129a3f3b9c: Link UP Sep 10 00:51:49.724337 systemd-networkd[1033]: lxc8e129a3f3b9c: Gained carrier Sep 10 00:51:49.728458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8e129a3f3b9c: link becomes ready Sep 10 00:51:49.730229 systemd-networkd[1033]: lxc11a3636f8bd1: Link UP Sep 10 00:51:49.741638 kernel: eth0: renamed from tmp1a5ff Sep 10 00:51:49.749311 systemd-networkd[1033]: lxc11a3636f8bd1: Gained carrier Sep 10 00:51:49.749643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc11a3636f8bd1: link becomes ready Sep 10 00:51:50.343103 systemd-networkd[1033]: cilium_vxlan: Gained IPv6LL Sep 10 00:51:50.703942 kubelet[1946]: E0910 00:51:50.703887 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:50.971792 systemd-networkd[1033]: lxc_health: Gained IPv6LL Sep 10 00:51:51.419788 systemd-networkd[1033]: lxc11a3636f8bd1: Gained IPv6LL Sep 10 00:51:51.703355 kubelet[1946]: E0910 00:51:51.703218 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:51:51.739890 systemd-networkd[1033]: lxc8e129a3f3b9c: Gained IPv6LL Sep 10 00:51:52.327209 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:38884.service. 
Sep 10 00:51:52.370094 sshd[3161]: Accepted publickey for core from 10.0.0.1 port 38884 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:51:52.371890 sshd[3161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:52.376005 systemd-logind[1186]: New session 9 of user core.
Sep 10 00:51:52.377346 systemd[1]: Started session-9.scope.
Sep 10 00:51:52.489805 sshd[3161]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:52.492302 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:38884.service: Deactivated successfully.
Sep 10 00:51:52.493031 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 00:51:52.493505 systemd-logind[1186]: Session 9 logged out. Waiting for processes to exit.
Sep 10 00:51:52.494311 systemd-logind[1186]: Removed session 9.
Sep 10 00:51:53.088750 env[1203]: time="2025-09-10T00:51:53.088680932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:51:53.088750 env[1203]: time="2025-09-10T00:51:53.088720819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:51:53.088750 env[1203]: time="2025-09-10T00:51:53.088730507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:51:53.089162 env[1203]: time="2025-09-10T00:51:53.088878431Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a5ff9029dd4862d01622a9206b3775d989dbace544fe990a1123a4459bf36a2 pid=3191 runtime=io.containerd.runc.v2
Sep 10 00:51:53.091087 env[1203]: time="2025-09-10T00:51:53.091009627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:51:53.091087 env[1203]: time="2025-09-10T00:51:53.091051498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:51:53.091087 env[1203]: time="2025-09-10T00:51:53.091061867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:51:53.091334 env[1203]: time="2025-09-10T00:51:53.091275207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64c543c709461df11311988fe146e12366d1d80965ec0ee60f00f9e7f633aec9 pid=3199 runtime=io.containerd.runc.v2
Sep 10 00:51:53.103956 systemd[1]: Started cri-containerd-1a5ff9029dd4862d01622a9206b3775d989dbace544fe990a1123a4459bf36a2.scope.
Sep 10 00:51:53.112129 systemd[1]: Started cri-containerd-64c543c709461df11311988fe146e12366d1d80965ec0ee60f00f9e7f633aec9.scope.
Sep 10 00:51:53.116563 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:51:53.125498 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:51:53.139310 env[1203]: time="2025-09-10T00:51:53.139267562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d82mg,Uid:1a8c261e-3773-41a3-b13c-62dd494f6df8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a5ff9029dd4862d01622a9206b3775d989dbace544fe990a1123a4459bf36a2\""
Sep 10 00:51:53.141936 kubelet[1946]: E0910 00:51:53.141891 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:53.145584 env[1203]: time="2025-09-10T00:51:53.145520929Z" level=info msg="CreateContainer within sandbox \"1a5ff9029dd4862d01622a9206b3775d989dbace544fe990a1123a4459bf36a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:51:53.158138 env[1203]: time="2025-09-10T00:51:53.158087210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ch57f,Uid:2c28079f-2873-46a8-9b0f-59e46f985aef,Namespace:kube-system,Attempt:0,} returns sandbox id \"64c543c709461df11311988fe146e12366d1d80965ec0ee60f00f9e7f633aec9\""
Sep 10 00:51:53.158855 kubelet[1946]: E0910 00:51:53.158818 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:53.164533 env[1203]: time="2025-09-10T00:51:53.164491438Z" level=info msg="CreateContainer within sandbox \"64c543c709461df11311988fe146e12366d1d80965ec0ee60f00f9e7f633aec9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:51:53.167613 env[1203]: time="2025-09-10T00:51:53.167537673Z" level=info msg="CreateContainer within sandbox \"1a5ff9029dd4862d01622a9206b3775d989dbace544fe990a1123a4459bf36a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c01b3b614dc37826f2949750862ad3c490f456b758fa9b3ae7f08996f8f5e59\""
Sep 10 00:51:53.168005 env[1203]: time="2025-09-10T00:51:53.167961718Z" level=info msg="StartContainer for \"9c01b3b614dc37826f2949750862ad3c490f456b758fa9b3ae7f08996f8f5e59\""
Sep 10 00:51:53.178367 env[1203]: time="2025-09-10T00:51:53.178330594Z" level=info msg="CreateContainer within sandbox \"64c543c709461df11311988fe146e12366d1d80965ec0ee60f00f9e7f633aec9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e155c83e87a75f08b2f466b88bcd6d5bb6f624e5bd79f5bb4053640621786d4\""
Sep 10 00:51:53.180138 env[1203]: time="2025-09-10T00:51:53.180110265Z" level=info msg="StartContainer for \"4e155c83e87a75f08b2f466b88bcd6d5bb6f624e5bd79f5bb4053640621786d4\""
Sep 10 00:51:53.185406 systemd[1]: Started cri-containerd-9c01b3b614dc37826f2949750862ad3c490f456b758fa9b3ae7f08996f8f5e59.scope.
Sep 10 00:51:53.205501 systemd[1]: Started cri-containerd-4e155c83e87a75f08b2f466b88bcd6d5bb6f624e5bd79f5bb4053640621786d4.scope.
Sep 10 00:51:53.360205 env[1203]: time="2025-09-10T00:51:53.360070320Z" level=info msg="StartContainer for \"9c01b3b614dc37826f2949750862ad3c490f456b758fa9b3ae7f08996f8f5e59\" returns successfully"
Sep 10 00:51:53.542540 env[1203]: time="2025-09-10T00:51:53.542468581Z" level=info msg="StartContainer for \"4e155c83e87a75f08b2f466b88bcd6d5bb6f624e5bd79f5bb4053640621786d4\" returns successfully"
Sep 10 00:51:53.708221 kubelet[1946]: E0910 00:51:53.708176 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:53.709729 kubelet[1946]: E0910 00:51:53.709694 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:53.720045 kubelet[1946]: I0910 00:51:53.719986 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ch57f" podStartSLOduration=21.719971184 podStartE2EDuration="21.719971184s" podCreationTimestamp="2025-09-10 00:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:51:53.719288792 +0000 UTC m=+28.181043573" watchObservedRunningTime="2025-09-10 00:51:53.719971184 +0000 UTC m=+28.181725965"
Sep 10 00:51:53.738918 kubelet[1946]: I0910 00:51:53.738839 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d82mg" podStartSLOduration=21.738813525 podStartE2EDuration="21.738813525s" podCreationTimestamp="2025-09-10 00:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:51:53.738195848 +0000 UTC m=+28.199950639" watchObservedRunningTime="2025-09-10 00:51:53.738813525 +0000 UTC m=+28.200568306"
Sep 10 00:51:54.094487 systemd[1]: run-containerd-runc-k8s.io-64c543c709461df11311988fe146e12366d1d80965ec0ee60f00f9e7f633aec9-runc.MEMGhS.mount: Deactivated successfully.
Sep 10 00:51:54.711528 kubelet[1946]: E0910 00:51:54.711487 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:54.712018 kubelet[1946]: E0910 00:51:54.711487 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:55.715613 kubelet[1946]: E0910 00:51:55.712344 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:55.715613 kubelet[1946]: E0910 00:51:55.712533 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:51:57.498454 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:38900.service.
Sep 10 00:51:57.556099 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 38900 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:51:57.558282 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:51:57.565908 systemd-logind[1186]: New session 10 of user core.
Sep 10 00:51:57.569304 systemd[1]: Started session-10.scope.
Sep 10 00:51:57.744304 sshd[3348]: pam_unix(sshd:session): session closed for user core
Sep 10 00:51:57.748842 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:38900.service: Deactivated successfully.
Sep 10 00:51:57.749858 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 00:51:57.754067 systemd-logind[1186]: Session 10 logged out. Waiting for processes to exit.
Sep 10 00:51:57.761887 systemd-logind[1186]: Removed session 10.
Sep 10 00:52:02.748757 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:47166.service.
Sep 10 00:52:02.793414 sshd[3364]: Accepted publickey for core from 10.0.0.1 port 47166 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:02.794962 sshd[3364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:02.798594 systemd-logind[1186]: New session 11 of user core.
Sep 10 00:52:02.799410 systemd[1]: Started session-11.scope.
Sep 10 00:52:02.917212 sshd[3364]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:02.919863 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:47166.service: Deactivated successfully.
Sep 10 00:52:02.920795 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 00:52:02.921852 systemd-logind[1186]: Session 11 logged out. Waiting for processes to exit.
Sep 10 00:52:02.922761 systemd-logind[1186]: Removed session 11.
Sep 10 00:52:07.922274 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:47182.service.
Sep 10 00:52:07.962896 sshd[3379]: Accepted publickey for core from 10.0.0.1 port 47182 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:07.963937 sshd[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:07.967662 systemd-logind[1186]: New session 12 of user core.
Sep 10 00:52:07.968729 systemd[1]: Started session-12.scope.
Sep 10 00:52:08.083184 sshd[3379]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:08.086358 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:47182.service: Deactivated successfully.
Sep 10 00:52:08.086885 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 00:52:08.089072 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:47192.service.
Sep 10 00:52:08.089676 systemd-logind[1186]: Session 12 logged out. Waiting for processes to exit.
Sep 10 00:52:08.090501 systemd-logind[1186]: Removed session 12.
Sep 10 00:52:08.130776 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 47192 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:08.131997 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:08.135370 systemd-logind[1186]: New session 13 of user core.
Sep 10 00:52:08.136149 systemd[1]: Started session-13.scope.
Sep 10 00:52:08.290562 sshd[3394]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:08.294756 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:47200.service.
Sep 10 00:52:08.295309 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:47192.service: Deactivated successfully.
Sep 10 00:52:08.295924 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 00:52:08.297182 systemd-logind[1186]: Session 13 logged out. Waiting for processes to exit.
Sep 10 00:52:08.298217 systemd-logind[1186]: Removed session 13.
Sep 10 00:52:08.336973 sshd[3405]: Accepted publickey for core from 10.0.0.1 port 47200 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:08.338252 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:08.341550 systemd-logind[1186]: New session 14 of user core.
Sep 10 00:52:08.342376 systemd[1]: Started session-14.scope.
Sep 10 00:52:08.449516 sshd[3405]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:08.451930 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:47200.service: Deactivated successfully.
Sep 10 00:52:08.452633 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 00:52:08.453101 systemd-logind[1186]: Session 14 logged out. Waiting for processes to exit.
Sep 10 00:52:08.453720 systemd-logind[1186]: Removed session 14.
Sep 10 00:52:13.454083 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:38508.service.
Sep 10 00:52:13.493906 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 38508 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:13.495061 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:13.498437 systemd-logind[1186]: New session 15 of user core.
Sep 10 00:52:13.499223 systemd[1]: Started session-15.scope.
Sep 10 00:52:13.599780 sshd[3420]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:13.602252 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:38508.service: Deactivated successfully.
Sep 10 00:52:13.603120 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 00:52:13.603800 systemd-logind[1186]: Session 15 logged out. Waiting for processes to exit.
Sep 10 00:52:13.604605 systemd-logind[1186]: Removed session 15.
Sep 10 00:52:18.604242 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:38520.service.
Sep 10 00:52:18.644141 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 38520 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:18.645239 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:18.648233 systemd-logind[1186]: New session 16 of user core.
Sep 10 00:52:18.648980 systemd[1]: Started session-16.scope.
Sep 10 00:52:18.751777 sshd[3434]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:18.754828 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:38520.service: Deactivated successfully.
Sep 10 00:52:18.755392 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 00:52:18.755943 systemd-logind[1186]: Session 16 logged out. Waiting for processes to exit.
Sep 10 00:52:18.757174 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:38524.service.
Sep 10 00:52:18.758078 systemd-logind[1186]: Removed session 16.
Sep 10 00:52:18.797512 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 38524 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:18.798712 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:18.802032 systemd-logind[1186]: New session 17 of user core.
Sep 10 00:52:18.802834 systemd[1]: Started session-17.scope.
Sep 10 00:52:18.998386 sshd[3447]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:19.001088 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:38524.service: Deactivated successfully.
Sep 10 00:52:19.001631 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 00:52:19.002341 systemd-logind[1186]: Session 17 logged out. Waiting for processes to exit.
Sep 10 00:52:19.003531 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:38540.service.
Sep 10 00:52:19.004629 systemd-logind[1186]: Removed session 17.
Sep 10 00:52:19.045932 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 38540 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:19.047056 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:19.050891 systemd-logind[1186]: New session 18 of user core.
Sep 10 00:52:19.051728 systemd[1]: Started session-18.scope.
Sep 10 00:52:20.193381 sshd[3458]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:20.199691 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:36070.service.
Sep 10 00:52:20.201270 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:38540.service: Deactivated successfully.
Sep 10 00:52:20.201994 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 00:52:20.202717 systemd-logind[1186]: Session 18 logged out. Waiting for processes to exit.
Sep 10 00:52:20.203513 systemd-logind[1186]: Removed session 18.
Sep 10 00:52:20.241661 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 36070 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:20.243282 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:20.247332 systemd-logind[1186]: New session 19 of user core.
Sep 10 00:52:20.248444 systemd[1]: Started session-19.scope.
Sep 10 00:52:20.479877 sshd[3477]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:20.483866 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:36072.service.
Sep 10 00:52:20.484603 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:36070.service: Deactivated successfully.
Sep 10 00:52:20.486083 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 00:52:20.486838 systemd-logind[1186]: Session 19 logged out. Waiting for processes to exit.
Sep 10 00:52:20.487819 systemd-logind[1186]: Removed session 19.
Sep 10 00:52:20.530551 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 36072 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:20.532119 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:20.536066 systemd-logind[1186]: New session 20 of user core.
Sep 10 00:52:20.536934 systemd[1]: Started session-20.scope.
Sep 10 00:52:20.778146 sshd[3488]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:20.780811 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:36072.service: Deactivated successfully.
Sep 10 00:52:20.781527 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 00:52:20.782073 systemd-logind[1186]: Session 20 logged out. Waiting for processes to exit.
Sep 10 00:52:20.782902 systemd-logind[1186]: Removed session 20.
Sep 10 00:52:25.782663 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:36088.service.
Sep 10 00:52:25.823119 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 36088 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:25.824405 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:25.827783 systemd-logind[1186]: New session 21 of user core.
Sep 10 00:52:25.828568 systemd[1]: Started session-21.scope.
Sep 10 00:52:25.930911 sshd[3504]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:25.933383 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:36088.service: Deactivated successfully.
Sep 10 00:52:25.934306 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 00:52:25.935080 systemd-logind[1186]: Session 21 logged out. Waiting for processes to exit.
Sep 10 00:52:25.935785 systemd-logind[1186]: Removed session 21.
Sep 10 00:52:30.935571 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:58062.service.
Sep 10 00:52:30.975470 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 58062 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:30.976490 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:30.980018 systemd-logind[1186]: New session 22 of user core.
Sep 10 00:52:30.980811 systemd[1]: Started session-22.scope.
Sep 10 00:52:31.082819 sshd[3520]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:31.084881 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:58062.service: Deactivated successfully.
Sep 10 00:52:31.085550 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 00:52:31.086207 systemd-logind[1186]: Session 22 logged out. Waiting for processes to exit.
Sep 10 00:52:31.086993 systemd-logind[1186]: Removed session 22.
Sep 10 00:52:36.089092 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:58078.service.
Sep 10 00:52:36.133764 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 58078 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:36.135527 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:36.140129 systemd-logind[1186]: New session 23 of user core.
Sep 10 00:52:36.141302 systemd[1]: Started session-23.scope.
Sep 10 00:52:36.265043 sshd[3535]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:36.268154 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:58078.service: Deactivated successfully.
Sep 10 00:52:36.269135 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 00:52:36.269793 systemd-logind[1186]: Session 23 logged out. Waiting for processes to exit.
Sep 10 00:52:36.270680 systemd-logind[1186]: Removed session 23.
Sep 10 00:52:38.634074 kubelet[1946]: E0910 00:52:38.634033 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:41.269903 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:48068.service.
Sep 10 00:52:41.310345 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 48068 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:41.311546 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:41.314757 systemd-logind[1186]: New session 24 of user core.
Sep 10 00:52:41.315496 systemd[1]: Started session-24.scope.
Sep 10 00:52:41.414622 sshd[3548]: pam_unix(sshd:session): session closed for user core
Sep 10 00:52:41.417737 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:48068.service: Deactivated successfully.
Sep 10 00:52:41.418290 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 00:52:41.418855 systemd-logind[1186]: Session 24 logged out. Waiting for processes to exit.
Sep 10 00:52:41.420075 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:48078.service.
Sep 10 00:52:41.420857 systemd-logind[1186]: Removed session 24.
Sep 10 00:52:41.460345 sshd[3562]: Accepted publickey for core from 10.0.0.1 port 48078 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:52:41.461531 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:52:41.465003 systemd-logind[1186]: New session 25 of user core.
Sep 10 00:52:41.465838 systemd[1]: Started session-25.scope.
Sep 10 00:52:42.890775 env[1203]: time="2025-09-10T00:52:42.890674549Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:52:42.896116 env[1203]: time="2025-09-10T00:52:42.896087297Z" level=info msg="StopContainer for \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\" with timeout 2 (s)"
Sep 10 00:52:42.896407 env[1203]: time="2025-09-10T00:52:42.896359804Z" level=info msg="Stop container \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\" with signal terminated"
Sep 10 00:52:42.902429 systemd-networkd[1033]: lxc_health: Link DOWN
Sep 10 00:52:42.902435 systemd-networkd[1033]: lxc_health: Lost carrier
Sep 10 00:52:42.939087 systemd[1]: cri-containerd-4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468.scope: Deactivated successfully.
Sep 10 00:52:42.939351 systemd[1]: cri-containerd-4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468.scope: Consumed 6.187s CPU time.
Sep 10 00:52:42.954992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468-rootfs.mount: Deactivated successfully.
Sep 10 00:52:43.078661 env[1203]: time="2025-09-10T00:52:43.078534312Z" level=info msg="StopContainer for \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\" with timeout 30 (s)"
Sep 10 00:52:43.079493 env[1203]: time="2025-09-10T00:52:43.079318961Z" level=info msg="Stop container \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\" with signal terminated"
Sep 10 00:52:43.081256 env[1203]: time="2025-09-10T00:52:43.081208486Z" level=info msg="shim disconnected" id=4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468
Sep 10 00:52:43.081256 env[1203]: time="2025-09-10T00:52:43.081245556Z" level=warning msg="cleaning up after shim disconnected" id=4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468 namespace=k8s.io
Sep 10 00:52:43.081256 env[1203]: time="2025-09-10T00:52:43.081254393Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:43.089283 systemd[1]: cri-containerd-6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33.scope: Deactivated successfully.
Sep 10 00:52:43.091233 env[1203]: time="2025-09-10T00:52:43.091188710Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3613 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:43.094874 env[1203]: time="2025-09-10T00:52:43.094757842Z" level=info msg="StopContainer for \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\" returns successfully"
Sep 10 00:52:43.095487 env[1203]: time="2025-09-10T00:52:43.095461076Z" level=info msg="StopPodSandbox for \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\""
Sep 10 00:52:43.095547 env[1203]: time="2025-09-10T00:52:43.095530348Z" level=info msg="Container to stop \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:52:43.095598 env[1203]: time="2025-09-10T00:52:43.095545636Z" level=info msg="Container to stop \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:52:43.095598 env[1203]: time="2025-09-10T00:52:43.095556938Z" level=info msg="Container to stop \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:52:43.095598 env[1203]: time="2025-09-10T00:52:43.095567759Z" level=info msg="Container to stop \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:52:43.095697 env[1203]: time="2025-09-10T00:52:43.095598678Z" level=info msg="Container to stop \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:52:43.097408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909-shm.mount: Deactivated successfully.
Sep 10 00:52:43.103147 systemd[1]: cri-containerd-99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909.scope: Deactivated successfully.
Sep 10 00:52:43.105463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33-rootfs.mount: Deactivated successfully.
Sep 10 00:52:43.116049 env[1203]: time="2025-09-10T00:52:43.115977151Z" level=info msg="shim disconnected" id=6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33
Sep 10 00:52:43.116049 env[1203]: time="2025-09-10T00:52:43.116034669Z" level=warning msg="cleaning up after shim disconnected" id=6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33 namespace=k8s.io
Sep 10 00:52:43.116049 env[1203]: time="2025-09-10T00:52:43.116043817Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:43.122409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909-rootfs.mount: Deactivated successfully.
Sep 10 00:52:43.123773 env[1203]: time="2025-09-10T00:52:43.123717866Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3661 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:43.124885 env[1203]: time="2025-09-10T00:52:43.124830607Z" level=info msg="shim disconnected" id=99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909
Sep 10 00:52:43.124885 env[1203]: time="2025-09-10T00:52:43.124882896Z" level=warning msg="cleaning up after shim disconnected" id=99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909 namespace=k8s.io
Sep 10 00:52:43.124977 env[1203]: time="2025-09-10T00:52:43.124893838Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:43.126361 env[1203]: time="2025-09-10T00:52:43.126321896Z" level=info msg="StopContainer for \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\" returns successfully"
Sep 10 00:52:43.126923 env[1203]: time="2025-09-10T00:52:43.126896317Z" level=info msg="StopPodSandbox for \"4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b\""
Sep 10 00:52:43.126998 env[1203]: time="2025-09-10T00:52:43.126956582Z" level=info msg="Container to stop \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:52:43.131647 env[1203]: time="2025-09-10T00:52:43.131618957Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3673 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:43.132199 env[1203]: time="2025-09-10T00:52:43.131882556Z" level=info msg="TearDown network for sandbox \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" successfully"
Sep 10 00:52:43.132199 env[1203]: time="2025-09-10T00:52:43.131909819Z" level=info msg="StopPodSandbox for \"99f2e68f0b02c885948a69cb843ee3fba535b3e73986bd66914671b1d9273909\" returns successfully"
Sep 10 00:52:43.133412 systemd[1]: cri-containerd-4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b.scope: Deactivated successfully.
Sep 10 00:52:43.157880 env[1203]: time="2025-09-10T00:52:43.157726649Z" level=info msg="shim disconnected" id=4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b
Sep 10 00:52:43.157880 env[1203]: time="2025-09-10T00:52:43.157800860Z" level=warning msg="cleaning up after shim disconnected" id=4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b namespace=k8s.io
Sep 10 00:52:43.157880 env[1203]: time="2025-09-10T00:52:43.157809757Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:43.165082 env[1203]: time="2025-09-10T00:52:43.165048351Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3704 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:43.165435 env[1203]: time="2025-09-10T00:52:43.165410899Z" level=info msg="TearDown network for sandbox \"4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b\" successfully"
Sep 10 00:52:43.165540 env[1203]: time="2025-09-10T00:52:43.165518492Z" level=info msg="StopPodSandbox for \"4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b\" returns successfully"
Sep 10 00:52:43.293150 kubelet[1946]: I0910 00:52:43.293103 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4864cb9b-04e2-4260-b12c-2f6c967369f1-clustermesh-secrets\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293150 kubelet[1946]: I0910 00:52:43.293140 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-bpf-maps\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293150 kubelet[1946]: I0910 00:52:43.293159 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k65lr\" (UniqueName: \"kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-kube-api-access-k65lr\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293543 kubelet[1946]: I0910 00:52:43.293174 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-hostproc\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293543 kubelet[1946]: I0910 00:52:43.293190 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csknd\" (UniqueName: \"kubernetes.io/projected/3a04883d-8394-4dff-b7dd-76131a95da99-kube-api-access-csknd\") pod \"3a04883d-8394-4dff-b7dd-76131a95da99\" (UID: \"3a04883d-8394-4dff-b7dd-76131a95da99\") "
Sep 10 00:52:43.293543 kubelet[1946]: I0910 00:52:43.293206 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-hubble-tls\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293543 kubelet[1946]: I0910 00:52:43.293219 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cni-path\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293543 kubelet[1946]: I0910 00:52:43.293233 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-kernel\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293543 kubelet[1946]: I0910 00:52:43.293245 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-cgroup\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293726 kubelet[1946]: I0910 00:52:43.293257 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-net\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293726 kubelet[1946]: I0910 00:52:43.293250 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:52:43.293726 kubelet[1946]: I0910 00:52:43.293274 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a04883d-8394-4dff-b7dd-76131a95da99-cilium-config-path\") pod \"3a04883d-8394-4dff-b7dd-76131a95da99\" (UID: \"3a04883d-8394-4dff-b7dd-76131a95da99\") "
Sep 10 00:52:43.293726 kubelet[1946]: I0910 00:52:43.293290 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-etc-cni-netd\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293726 kubelet[1946]: I0910 00:52:43.293302 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cni-path" (OuterVolumeSpecName: "cni-path") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:52:43.293726 kubelet[1946]: I0910 00:52:43.293304 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-run\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") "
Sep 10 00:52:43.293973 kubelet[1946]: I0910 00:52:43.293328 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.293973 kubelet[1946]: I0910 00:52:43.293346 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-xtables-lock\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " Sep 10 00:52:43.293973 kubelet[1946]: I0910 00:52:43.293352 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.293973 kubelet[1946]: I0910 00:52:43.293362 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-lib-modules\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " Sep 10 00:52:43.293973 kubelet[1946]: I0910 00:52:43.293368 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.294095 kubelet[1946]: I0910 00:52:43.293382 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.294095 kubelet[1946]: I0910 00:52:43.293382 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-config-path\") pod \"4864cb9b-04e2-4260-b12c-2f6c967369f1\" (UID: \"4864cb9b-04e2-4260-b12c-2f6c967369f1\") " Sep 10 00:52:43.294095 kubelet[1946]: I0910 00:52:43.293418 1946 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.294095 kubelet[1946]: I0910 00:52:43.293428 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.294095 kubelet[1946]: I0910 00:52:43.293436 1946 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.294095 kubelet[1946]: I0910 00:52:43.293444 1946 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.294095 kubelet[1946]: I0910 00:52:43.293452 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.294309 kubelet[1946]: I0910 00:52:43.293459 1946 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.296136 kubelet[1946]: I0910 00:52:43.296101 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:52:43.296240 kubelet[1946]: I0910 00:52:43.296140 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-hostproc" (OuterVolumeSpecName: "hostproc") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.296240 kubelet[1946]: I0910 00:52:43.296141 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a04883d-8394-4dff-b7dd-76131a95da99-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3a04883d-8394-4dff-b7dd-76131a95da99" (UID: "3a04883d-8394-4dff-b7dd-76131a95da99"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:52:43.296240 kubelet[1946]: I0910 00:52:43.296171 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.296240 kubelet[1946]: I0910 00:52:43.296187 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.296904 kubelet[1946]: I0910 00:52:43.296812 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-kube-api-access-k65lr" (OuterVolumeSpecName: "kube-api-access-k65lr") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "kube-api-access-k65lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:52:43.296904 kubelet[1946]: I0910 00:52:43.296860 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:43.297699 kubelet[1946]: I0910 00:52:43.297677 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4864cb9b-04e2-4260-b12c-2f6c967369f1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:52:43.299000 kubelet[1946]: I0910 00:52:43.298920 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a04883d-8394-4dff-b7dd-76131a95da99-kube-api-access-csknd" (OuterVolumeSpecName: "kube-api-access-csknd") pod "3a04883d-8394-4dff-b7dd-76131a95da99" (UID: "3a04883d-8394-4dff-b7dd-76131a95da99"). InnerVolumeSpecName "kube-api-access-csknd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:52:43.299152 kubelet[1946]: I0910 00:52:43.299116 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4864cb9b-04e2-4260-b12c-2f6c967369f1" (UID: "4864cb9b-04e2-4260-b12c-2f6c967369f1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:52:43.394432 kubelet[1946]: I0910 00:52:43.394391 1946 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394432 kubelet[1946]: I0910 00:52:43.394417 1946 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394432 kubelet[1946]: I0910 00:52:43.394425 1946 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394432 kubelet[1946]: I0910 00:52:43.394434 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a04883d-8394-4dff-b7dd-76131a95da99-cilium-config-path\") on node \"localhost\" 
DevicePath \"\"" Sep 10 00:52:43.394563 kubelet[1946]: I0910 00:52:43.394446 1946 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394563 kubelet[1946]: I0910 00:52:43.394453 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4864cb9b-04e2-4260-b12c-2f6c967369f1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394563 kubelet[1946]: I0910 00:52:43.394461 1946 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4864cb9b-04e2-4260-b12c-2f6c967369f1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394563 kubelet[1946]: I0910 00:52:43.394469 1946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k65lr\" (UniqueName: \"kubernetes.io/projected/4864cb9b-04e2-4260-b12c-2f6c967369f1-kube-api-access-k65lr\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394563 kubelet[1946]: I0910 00:52:43.394476 1946 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4864cb9b-04e2-4260-b12c-2f6c967369f1-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.394563 kubelet[1946]: I0910 00:52:43.394484 1946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csknd\" (UniqueName: \"kubernetes.io/projected/3a04883d-8394-4dff-b7dd-76131a95da99-kube-api-access-csknd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:43.640676 systemd[1]: Removed slice kubepods-burstable-pod4864cb9b_04e2_4260_b12c_2f6c967369f1.slice. Sep 10 00:52:43.640769 systemd[1]: kubepods-burstable-pod4864cb9b_04e2_4260_b12c_2f6c967369f1.slice: Consumed 6.305s CPU time. 
Sep 10 00:52:43.641724 systemd[1]: Removed slice kubepods-besteffort-pod3a04883d_8394_4dff_b7dd_76131a95da99.slice. Sep 10 00:52:43.811392 kubelet[1946]: I0910 00:52:43.811332 1946 scope.go:117] "RemoveContainer" containerID="6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33" Sep 10 00:52:43.813136 env[1203]: time="2025-09-10T00:52:43.813076184Z" level=info msg="RemoveContainer for \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\"" Sep 10 00:52:43.819909 env[1203]: time="2025-09-10T00:52:43.819870754Z" level=info msg="RemoveContainer for \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\" returns successfully" Sep 10 00:52:43.820879 kubelet[1946]: I0910 00:52:43.820104 1946 scope.go:117] "RemoveContainer" containerID="6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33" Sep 10 00:52:43.820879 kubelet[1946]: E0910 00:52:43.820492 1946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\": not found" containerID="6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33" Sep 10 00:52:43.820879 kubelet[1946]: I0910 00:52:43.820515 1946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33"} err="failed to get container status \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\": not found" Sep 10 00:52:43.820879 kubelet[1946]: I0910 00:52:43.820598 1946 scope.go:117] "RemoveContainer" containerID="4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468" Sep 10 00:52:43.821016 env[1203]: time="2025-09-10T00:52:43.820271865Z" level=error msg="ContainerStatus for 
\"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c1a66b348bf9a2491552cd969a48b2a54c3639476e4f0c24ae1408748fede33\": not found" Sep 10 00:52:43.822544 env[1203]: time="2025-09-10T00:52:43.822480385Z" level=info msg="RemoveContainer for \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\"" Sep 10 00:52:43.826277 env[1203]: time="2025-09-10T00:52:43.826235801Z" level=info msg="RemoveContainer for \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\" returns successfully" Sep 10 00:52:43.826408 kubelet[1946]: I0910 00:52:43.826382 1946 scope.go:117] "RemoveContainer" containerID="597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf" Sep 10 00:52:43.827506 env[1203]: time="2025-09-10T00:52:43.827468690Z" level=info msg="RemoveContainer for \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\"" Sep 10 00:52:43.830888 env[1203]: time="2025-09-10T00:52:43.830625279Z" level=info msg="RemoveContainer for \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\" returns successfully" Sep 10 00:52:43.831058 kubelet[1946]: I0910 00:52:43.830997 1946 scope.go:117] "RemoveContainer" containerID="28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72" Sep 10 00:52:43.833374 env[1203]: time="2025-09-10T00:52:43.833344538Z" level=info msg="RemoveContainer for \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\"" Sep 10 00:52:43.836184 env[1203]: time="2025-09-10T00:52:43.836150962Z" level=info msg="RemoveContainer for \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\" returns successfully" Sep 10 00:52:43.836325 kubelet[1946]: I0910 00:52:43.836299 1946 scope.go:117] "RemoveContainer" containerID="a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4" Sep 10 00:52:43.837226 env[1203]: time="2025-09-10T00:52:43.837194111Z" level=info 
msg="RemoveContainer for \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\"" Sep 10 00:52:43.839891 env[1203]: time="2025-09-10T00:52:43.839856663Z" level=info msg="RemoveContainer for \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\" returns successfully" Sep 10 00:52:43.840039 kubelet[1946]: I0910 00:52:43.840018 1946 scope.go:117] "RemoveContainer" containerID="9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700" Sep 10 00:52:43.840927 env[1203]: time="2025-09-10T00:52:43.840894813Z" level=info msg="RemoveContainer for \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\"" Sep 10 00:52:43.844622 env[1203]: time="2025-09-10T00:52:43.844521544Z" level=info msg="RemoveContainer for \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\" returns successfully" Sep 10 00:52:43.844799 kubelet[1946]: I0910 00:52:43.844753 1946 scope.go:117] "RemoveContainer" containerID="4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468" Sep 10 00:52:43.845023 env[1203]: time="2025-09-10T00:52:43.844956479Z" level=error msg="ContainerStatus for \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\": not found" Sep 10 00:52:43.845184 kubelet[1946]: E0910 00:52:43.845159 1946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\": not found" containerID="4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468" Sep 10 00:52:43.845251 kubelet[1946]: I0910 00:52:43.845189 1946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468"} err="failed to get 
container status \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\": rpc error: code = NotFound desc = an error occurred when try to find container \"4698cdef12f03c23dacbbe594a02747f2e681333c6afe9e12915849ca740f468\": not found" Sep 10 00:52:43.845251 kubelet[1946]: I0910 00:52:43.845210 1946 scope.go:117] "RemoveContainer" containerID="597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf" Sep 10 00:52:43.845474 env[1203]: time="2025-09-10T00:52:43.845382938Z" level=error msg="ContainerStatus for \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\": not found" Sep 10 00:52:43.845672 kubelet[1946]: E0910 00:52:43.845551 1946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\": not found" containerID="597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf" Sep 10 00:52:43.845672 kubelet[1946]: I0910 00:52:43.845594 1946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf"} err="failed to get container status \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\": rpc error: code = NotFound desc = an error occurred when try to find container \"597b1d81c54e8b987fcc12559814b8019dee80117e7c501f1a227b4ae07adcaf\": not found" Sep 10 00:52:43.845672 kubelet[1946]: I0910 00:52:43.845615 1946 scope.go:117] "RemoveContainer" containerID="28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72" Sep 10 00:52:43.845871 env[1203]: time="2025-09-10T00:52:43.845796552Z" level=error msg="ContainerStatus for \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\": not found" Sep 10 00:52:43.845978 kubelet[1946]: E0910 00:52:43.845954 1946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\": not found" containerID="28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72" Sep 10 00:52:43.846042 kubelet[1946]: I0910 00:52:43.845976 1946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72"} err="failed to get container status \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\": rpc error: code = NotFound desc = an error occurred when try to find container \"28022d38260d80a52e1f87d2f5c38b5fd5672c15541b22df8776409ad9ae9e72\": not found" Sep 10 00:52:43.846042 kubelet[1946]: I0910 00:52:43.845990 1946 scope.go:117] "RemoveContainer" containerID="a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4" Sep 10 00:52:43.846185 env[1203]: time="2025-09-10T00:52:43.846138952Z" level=error msg="ContainerStatus for \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\": not found" Sep 10 00:52:43.846303 kubelet[1946]: E0910 00:52:43.846278 1946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\": not found" containerID="a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4" Sep 10 00:52:43.846358 kubelet[1946]: I0910 00:52:43.846308 1946 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4"} err="failed to get container status \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7f994f2e0a0d48320a63ab497bd75ead13110f835bc95b9263b7096a9bce9b4\": not found" Sep 10 00:52:43.846358 kubelet[1946]: I0910 00:52:43.846329 1946 scope.go:117] "RemoveContainer" containerID="9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700" Sep 10 00:52:43.846561 env[1203]: time="2025-09-10T00:52:43.846499306Z" level=error msg="ContainerStatus for \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\": not found" Sep 10 00:52:43.846690 kubelet[1946]: E0910 00:52:43.846667 1946 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\": not found" containerID="9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700" Sep 10 00:52:43.846762 kubelet[1946]: I0910 00:52:43.846689 1946 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700"} err="failed to get container status \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bf5e055bea6f493fecbeb4a22eaccbe7ef06b3f4b231b97deb34d6bdd46b700\": not found" Sep 10 00:52:43.878450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b-rootfs.mount: 
Deactivated successfully. Sep 10 00:52:43.878549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d7f82f057bfc7bbbeb15eb976789b7708c23b26c3042be041345d37629b5b1b-shm.mount: Deactivated successfully. Sep 10 00:52:43.878623 systemd[1]: var-lib-kubelet-pods-4864cb9b\x2d04e2\x2d4260\x2db12c\x2d2f6c967369f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk65lr.mount: Deactivated successfully. Sep 10 00:52:43.878685 systemd[1]: var-lib-kubelet-pods-3a04883d\x2d8394\x2d4dff\x2db7dd\x2d76131a95da99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsknd.mount: Deactivated successfully. Sep 10 00:52:43.878751 systemd[1]: var-lib-kubelet-pods-4864cb9b\x2d04e2\x2d4260\x2db12c\x2d2f6c967369f1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:52:43.878808 systemd[1]: var-lib-kubelet-pods-4864cb9b\x2d04e2\x2d4260\x2db12c\x2d2f6c967369f1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:52:44.769728 sshd[3562]: pam_unix(sshd:session): session closed for user core Sep 10 00:52:44.772986 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:48078.service: Deactivated successfully. Sep 10 00:52:44.773536 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:52:44.774234 systemd-logind[1186]: Session 25 logged out. Waiting for processes to exit. Sep 10 00:52:44.775472 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:48092.service. Sep 10 00:52:44.776249 systemd-logind[1186]: Removed session 25. Sep 10 00:52:44.817384 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 48092 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:52:44.818749 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:52:44.822324 systemd-logind[1186]: New session 26 of user core. Sep 10 00:52:44.823116 systemd[1]: Started session-26.scope. 
Sep 10 00:52:45.290551 sshd[3722]: pam_unix(sshd:session): session closed for user core Sep 10 00:52:45.294719 systemd[1]: Started sshd@26-10.0.0.139:22-10.0.0.1:48108.service. Sep 10 00:52:45.295240 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:48092.service: Deactivated successfully. Sep 10 00:52:45.296164 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 00:52:45.299916 systemd-logind[1186]: Session 26 logged out. Waiting for processes to exit. Sep 10 00:52:45.301045 systemd-logind[1186]: Removed session 26. Sep 10 00:52:45.305371 kubelet[1946]: E0910 00:52:45.305326 1946 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4864cb9b-04e2-4260-b12c-2f6c967369f1" containerName="mount-cgroup" Sep 10 00:52:45.305371 kubelet[1946]: E0910 00:52:45.305354 1946 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a04883d-8394-4dff-b7dd-76131a95da99" containerName="cilium-operator" Sep 10 00:52:45.305371 kubelet[1946]: E0910 00:52:45.305360 1946 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4864cb9b-04e2-4260-b12c-2f6c967369f1" containerName="clean-cilium-state" Sep 10 00:52:45.305371 kubelet[1946]: E0910 00:52:45.305365 1946 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4864cb9b-04e2-4260-b12c-2f6c967369f1" containerName="cilium-agent" Sep 10 00:52:45.305371 kubelet[1946]: E0910 00:52:45.305371 1946 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4864cb9b-04e2-4260-b12c-2f6c967369f1" containerName="apply-sysctl-overwrites" Sep 10 00:52:45.305371 kubelet[1946]: E0910 00:52:45.305376 1946 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4864cb9b-04e2-4260-b12c-2f6c967369f1" containerName="mount-bpf-fs" Sep 10 00:52:45.305867 kubelet[1946]: I0910 00:52:45.305398 1946 memory_manager.go:354] "RemoveStaleState removing state" podUID="4864cb9b-04e2-4260-b12c-2f6c967369f1" containerName="cilium-agent" Sep 10 00:52:45.305867 kubelet[1946]: I0910 00:52:45.305405 
1946 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a04883d-8394-4dff-b7dd-76131a95da99" containerName="cilium-operator" Sep 10 00:52:45.311004 systemd[1]: Created slice kubepods-burstable-pod1ccf1ba3_c01a_4418_8bb2_1c5bf4cfdf3d.slice. Sep 10 00:52:45.347263 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 48108 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:52:45.348994 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:52:45.353751 systemd[1]: Started session-27.scope. Sep 10 00:52:45.355217 systemd-logind[1186]: New session 27 of user core. Sep 10 00:52:45.405591 kubelet[1946]: I0910 00:52:45.405517 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-lib-modules\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405591 kubelet[1946]: I0910 00:52:45.405561 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-xtables-lock\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405802 kubelet[1946]: I0910 00:52:45.405600 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-net\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405802 kubelet[1946]: I0910 00:52:45.405619 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-config-path\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405802 kubelet[1946]: I0910 00:52:45.405635 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-bpf-maps\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405802 kubelet[1946]: I0910 00:52:45.405746 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-ipsec-secrets\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405928 kubelet[1946]: I0910 00:52:45.405811 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cni-path\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405928 kubelet[1946]: I0910 00:52:45.405877 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-kernel\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405928 kubelet[1946]: I0910 00:52:45.405907 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hostproc\") pod \"cilium-fjnvg\" (UID: 
\"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.405928 kubelet[1946]: I0910 00:52:45.405925 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-cgroup\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.406037 kubelet[1946]: I0910 00:52:45.405953 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-etc-cni-netd\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.406037 kubelet[1946]: I0910 00:52:45.405990 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hubble-tls\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.406037 kubelet[1946]: I0910 00:52:45.406017 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-run\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.406037 kubelet[1946]: I0910 00:52:45.406034 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-clustermesh-secrets\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.406140 kubelet[1946]: I0910 00:52:45.406047 1946 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb8qz\" (UniqueName: \"kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-kube-api-access-nb8qz\") pod \"cilium-fjnvg\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " pod="kube-system/cilium-fjnvg" Sep 10 00:52:45.475170 sshd[3733]: pam_unix(sshd:session): session closed for user core Sep 10 00:52:45.478226 systemd[1]: sshd@26-10.0.0.139:22-10.0.0.1:48108.service: Deactivated successfully. Sep 10 00:52:45.478820 systemd[1]: session-27.scope: Deactivated successfully. Sep 10 00:52:45.481675 systemd[1]: Started sshd@27-10.0.0.139:22-10.0.0.1:48118.service. Sep 10 00:52:45.482893 systemd-logind[1186]: Session 27 logged out. Waiting for processes to exit. Sep 10 00:52:45.483688 kubelet[1946]: E0910 00:52:45.483638 1946 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-nb8qz lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-fjnvg" podUID="1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" Sep 10 00:52:45.484104 systemd-logind[1186]: Removed session 27. Sep 10 00:52:45.525607 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 48118 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:52:45.526893 sshd[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:52:45.530292 systemd-logind[1186]: New session 28 of user core. Sep 10 00:52:45.531210 systemd[1]: Started session-28.scope. 
Sep 10 00:52:45.635616 kubelet[1946]: I0910 00:52:45.635469 1946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a04883d-8394-4dff-b7dd-76131a95da99" path="/var/lib/kubelet/pods/3a04883d-8394-4dff-b7dd-76131a95da99/volumes" Sep 10 00:52:45.635937 kubelet[1946]: I0910 00:52:45.635894 1946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4864cb9b-04e2-4260-b12c-2f6c967369f1" path="/var/lib/kubelet/pods/4864cb9b-04e2-4260-b12c-2f6c967369f1/volumes" Sep 10 00:52:45.681321 kubelet[1946]: E0910 00:52:45.681280 1946 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 00:52:46.010365 kubelet[1946]: I0910 00:52:46.010322 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-bpf-maps\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010365 kubelet[1946]: I0910 00:52:46.010360 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-config-path\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010548 kubelet[1946]: I0910 00:52:46.010396 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-cgroup\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010548 kubelet[1946]: I0910 00:52:46.010415 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb8qz\" (UniqueName: 
\"kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-kube-api-access-nb8qz\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010548 kubelet[1946]: I0910 00:52:46.010419 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.010548 kubelet[1946]: I0910 00:52:46.010470 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.010548 kubelet[1946]: I0910 00:52:46.010436 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-clustermesh-secrets\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010698 kubelet[1946]: I0910 00:52:46.010512 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-net\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010698 kubelet[1946]: I0910 00:52:46.010529 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-ipsec-secrets\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010698 kubelet[1946]: I0910 00:52:46.010568 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.010933 kubelet[1946]: I0910 00:52:46.010891 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hostproc\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010933 kubelet[1946]: I0910 00:52:46.010915 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-run\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010933 kubelet[1946]: I0910 00:52:46.010930 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-xtables-lock\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.010933 kubelet[1946]: I0910 00:52:46.010944 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-etc-cni-netd\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: 
\"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.011180 kubelet[1946]: I0910 00:52:46.010959 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hubble-tls\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.011180 kubelet[1946]: I0910 00:52:46.010975 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-lib-modules\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.011180 kubelet[1946]: I0910 00:52:46.010987 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cni-path\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.011180 kubelet[1946]: I0910 00:52:46.011002 1946 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-kernel\") pod \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\" (UID: \"1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d\") " Sep 10 00:52:46.011180 kubelet[1946]: I0910 00:52:46.011027 1946 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.011180 kubelet[1946]: I0910 00:52:46.011035 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.011180 kubelet[1946]: 
I0910 00:52:46.011045 1946 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.011343 kubelet[1946]: I0910 00:52:46.011064 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.012100 kubelet[1946]: I0910 00:52:46.012057 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:52:46.012197 kubelet[1946]: I0910 00:52:46.012115 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.012197 kubelet[1946]: I0910 00:52:46.012137 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hostproc" (OuterVolumeSpecName: "hostproc") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.012197 kubelet[1946]: I0910 00:52:46.012151 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.012197 kubelet[1946]: I0910 00:52:46.012165 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.012197 kubelet[1946]: I0910 00:52:46.012179 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.014534 systemd[1]: var-lib-kubelet-pods-1ccf1ba3\x2dc01a\x2d4418\x2d8bb2\x2d1c5bf4cfdf3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnb8qz.mount: Deactivated successfully. Sep 10 00:52:46.015774 kubelet[1946]: I0910 00:52:46.014696 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:52:46.015774 kubelet[1946]: I0910 00:52:46.014738 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cni-path" (OuterVolumeSpecName: "cni-path") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:52:46.015774 kubelet[1946]: I0910 00:52:46.015454 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:52:46.015774 kubelet[1946]: I0910 00:52:46.015542 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-kube-api-access-nb8qz" (OuterVolumeSpecName: "kube-api-access-nb8qz") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "kube-api-access-nb8qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:52:46.015774 kubelet[1946]: I0910 00:52:46.015629 1946 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" (UID: "1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:52:46.016430 systemd[1]: var-lib-kubelet-pods-1ccf1ba3\x2dc01a\x2d4418\x2d8bb2\x2d1c5bf4cfdf3d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 10 00:52:46.016509 systemd[1]: var-lib-kubelet-pods-1ccf1ba3\x2dc01a\x2d4418\x2d8bb2\x2d1c5bf4cfdf3d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:52:46.016562 systemd[1]: var-lib-kubelet-pods-1ccf1ba3\x2dc01a\x2d4418\x2d8bb2\x2d1c5bf4cfdf3d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:52:46.111174 kubelet[1946]: I0910 00:52:46.111128 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111174 kubelet[1946]: I0910 00:52:46.111150 1946 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111174 kubelet[1946]: I0910 00:52:46.111159 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111174 kubelet[1946]: I0910 00:52:46.111166 1946 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111174 kubelet[1946]: I0910 00:52:46.111174 1946 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111174 
kubelet[1946]: I0910 00:52:46.111181 1946 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111174 kubelet[1946]: I0910 00:52:46.111189 1946 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111507 kubelet[1946]: I0910 00:52:46.111197 1946 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111507 kubelet[1946]: I0910 00:52:46.111207 1946 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111507 kubelet[1946]: I0910 00:52:46.111216 1946 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111507 kubelet[1946]: I0910 00:52:46.111224 1946 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb8qz\" (UniqueName: \"kubernetes.io/projected/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-kube-api-access-nb8qz\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.111507 kubelet[1946]: I0910 00:52:46.111232 1946 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:52:46.828827 systemd[1]: Removed slice 
kubepods-burstable-pod1ccf1ba3_c01a_4418_8bb2_1c5bf4cfdf3d.slice. Sep 10 00:52:46.862259 kubelet[1946]: W0910 00:52:46.862211 1946 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 10 00:52:46.862259 kubelet[1946]: E0910 00:52:46.862256 1946 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 10 00:52:46.862708 kubelet[1946]: W0910 00:52:46.862600 1946 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 10 00:52:46.862708 kubelet[1946]: E0910 00:52:46.862616 1946 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 10 00:52:46.862708 kubelet[1946]: W0910 00:52:46.862654 1946 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace 
"kube-system": no relationship found between node 'localhost' and this object Sep 10 00:52:46.862708 kubelet[1946]: E0910 00:52:46.862663 1946 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 10 00:52:46.862708 kubelet[1946]: W0910 00:52:46.862694 1946 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 10 00:52:46.862958 kubelet[1946]: E0910 00:52:46.862702 1946 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 10 00:52:46.864981 systemd[1]: Created slice kubepods-burstable-podb562ab1c_572b_4c56_aefa_51f7cf20f388.slice. 
Sep 10 00:52:47.016667 kubelet[1946]: I0910 00:52:47.016602 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-cilium-cgroup\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg" Sep 10 00:52:47.017034 kubelet[1946]: I0910 00:52:47.016707 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-xtables-lock\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg" Sep 10 00:52:47.017034 kubelet[1946]: I0910 00:52:47.016739 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-host-proc-sys-net\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg" Sep 10 00:52:47.017034 kubelet[1946]: I0910 00:52:47.016784 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-cni-path\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg" Sep 10 00:52:47.017034 kubelet[1946]: I0910 00:52:47.016801 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-lib-modules\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg" Sep 10 00:52:47.017034 kubelet[1946]: I0910 00:52:47.016815 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b562ab1c-572b-4c56-aefa-51f7cf20f388-clustermesh-secrets\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017034 kubelet[1946]: I0910 00:52:47.016829 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b562ab1c-572b-4c56-aefa-51f7cf20f388-hubble-tls\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017190 kubelet[1946]: I0910 00:52:47.016870 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-hostproc\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017190 kubelet[1946]: I0910 00:52:47.016914 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-cilium-run\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017190 kubelet[1946]: I0910 00:52:47.016929 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-bpf-maps\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017190 kubelet[1946]: I0910 00:52:47.016943 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-etc-cni-netd\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017190 kubelet[1946]: I0910 00:52:47.016958 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b562ab1c-572b-4c56-aefa-51f7cf20f388-cilium-config-path\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017190 kubelet[1946]: I0910 00:52:47.016974 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b562ab1c-572b-4c56-aefa-51f7cf20f388-cilium-ipsec-secrets\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017325 kubelet[1946]: I0910 00:52:47.017042 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j48q\" (UniqueName: \"kubernetes.io/projected/b562ab1c-572b-4c56-aefa-51f7cf20f388-kube-api-access-5j48q\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.017325 kubelet[1946]: I0910 00:52:47.017089 1946 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b562ab1c-572b-4c56-aefa-51f7cf20f388-host-proc-sys-kernel\") pod \"cilium-7mxsg\" (UID: \"b562ab1c-572b-4c56-aefa-51f7cf20f388\") " pod="kube-system/cilium-7mxsg"
Sep 10 00:52:47.618603 kubelet[1946]: I0910 00:52:47.618521 1946 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T00:52:47Z","lastTransitionTime":"2025-09-10T00:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 10 00:52:47.639084 kubelet[1946]: I0910 00:52:47.639034 1946 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d" path="/var/lib/kubelet/pods/1ccf1ba3-c01a-4418-8bb2-1c5bf4cfdf3d/volumes"
Sep 10 00:52:48.119318 kubelet[1946]: E0910 00:52:48.119254 1946 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 10 00:52:48.119318 kubelet[1946]: E0910 00:52:48.119315 1946 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-7mxsg: failed to sync secret cache: timed out waiting for the condition
Sep 10 00:52:48.119867 kubelet[1946]: E0910 00:52:48.119409 1946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b562ab1c-572b-4c56-aefa-51f7cf20f388-hubble-tls podName:b562ab1c-572b-4c56-aefa-51f7cf20f388 nodeName:}" failed. No retries permitted until 2025-09-10 00:52:48.619379765 +0000 UTC m=+83.081134546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b562ab1c-572b-4c56-aefa-51f7cf20f388-hubble-tls") pod "cilium-7mxsg" (UID: "b562ab1c-572b-4c56-aefa-51f7cf20f388") : failed to sync secret cache: timed out waiting for the condition
Sep 10 00:52:48.119867 kubelet[1946]: E0910 00:52:48.119261 1946 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Sep 10 00:52:48.119867 kubelet[1946]: E0910 00:52:48.119679 1946 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b562ab1c-572b-4c56-aefa-51f7cf20f388-clustermesh-secrets podName:b562ab1c-572b-4c56-aefa-51f7cf20f388 nodeName:}" failed. No retries permitted until 2025-09-10 00:52:48.619668403 +0000 UTC m=+83.081423184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/b562ab1c-572b-4c56-aefa-51f7cf20f388-clustermesh-secrets") pod "cilium-7mxsg" (UID: "b562ab1c-572b-4c56-aefa-51f7cf20f388") : failed to sync secret cache: timed out waiting for the condition
Sep 10 00:52:48.634695 kubelet[1946]: E0910 00:52:48.634288 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:48.667825 kubelet[1946]: E0910 00:52:48.667792 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:48.668323 env[1203]: time="2025-09-10T00:52:48.668288616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mxsg,Uid:b562ab1c-572b-4c56-aefa-51f7cf20f388,Namespace:kube-system,Attempt:0,}"
Sep 10 00:52:48.681401 env[1203]: time="2025-09-10T00:52:48.681319181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:52:48.681523 env[1203]: time="2025-09-10T00:52:48.681389965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:52:48.681523 env[1203]: time="2025-09-10T00:52:48.681497149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:52:48.681830 env[1203]: time="2025-09-10T00:52:48.681763935Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044 pid=3778 runtime=io.containerd.runc.v2
Sep 10 00:52:48.692742 systemd[1]: Started cri-containerd-d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044.scope.
Sep 10 00:52:48.712220 env[1203]: time="2025-09-10T00:52:48.712175764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mxsg,Uid:b562ab1c-572b-4c56-aefa-51f7cf20f388,Namespace:kube-system,Attempt:0,} returns sandbox id \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\""
Sep 10 00:52:48.713498 kubelet[1946]: E0910 00:52:48.712979 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:48.714820 env[1203]: time="2025-09-10T00:52:48.714797322Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 00:52:48.727262 env[1203]: time="2025-09-10T00:52:48.727216125Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b34ea2cc004fe24bf75426b3c638a26c986967a3873b2e13fc4dc67eaafe6abe\""
Sep 10 00:52:48.727663 env[1203]: time="2025-09-10T00:52:48.727614732Z" level=info msg="StartContainer for \"b34ea2cc004fe24bf75426b3c638a26c986967a3873b2e13fc4dc67eaafe6abe\""
Sep 10 00:52:48.741981 systemd[1]: Started cri-containerd-b34ea2cc004fe24bf75426b3c638a26c986967a3873b2e13fc4dc67eaafe6abe.scope.
Sep 10 00:52:48.766616 env[1203]: time="2025-09-10T00:52:48.763745097Z" level=info msg="StartContainer for \"b34ea2cc004fe24bf75426b3c638a26c986967a3873b2e13fc4dc67eaafe6abe\" returns successfully"
Sep 10 00:52:48.771653 systemd[1]: cri-containerd-b34ea2cc004fe24bf75426b3c638a26c986967a3873b2e13fc4dc67eaafe6abe.scope: Deactivated successfully.
Sep 10 00:52:48.799419 env[1203]: time="2025-09-10T00:52:48.799364511Z" level=info msg="shim disconnected" id=b34ea2cc004fe24bf75426b3c638a26c986967a3873b2e13fc4dc67eaafe6abe
Sep 10 00:52:48.799419 env[1203]: time="2025-09-10T00:52:48.799415067Z" level=warning msg="cleaning up after shim disconnected" id=b34ea2cc004fe24bf75426b3c638a26c986967a3873b2e13fc4dc67eaafe6abe namespace=k8s.io
Sep 10 00:52:48.799419 env[1203]: time="2025-09-10T00:52:48.799423474Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:48.811810 env[1203]: time="2025-09-10T00:52:48.811769159Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3863 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:48.831202 kubelet[1946]: E0910 00:52:48.831041 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:48.833830 env[1203]: time="2025-09-10T00:52:48.833769902Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 10 00:52:48.845840 env[1203]: time="2025-09-10T00:52:48.845767317Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a272d17e5bbdf289f075d3465617ea31deefacd8330ba4c23c11c449688d555\""
Sep 10 00:52:48.846458 env[1203]: time="2025-09-10T00:52:48.846411860Z" level=info msg="StartContainer for \"2a272d17e5bbdf289f075d3465617ea31deefacd8330ba4c23c11c449688d555\""
Sep 10 00:52:48.860632 systemd[1]: Started cri-containerd-2a272d17e5bbdf289f075d3465617ea31deefacd8330ba4c23c11c449688d555.scope.
Sep 10 00:52:48.888607 env[1203]: time="2025-09-10T00:52:48.888473003Z" level=info msg="StartContainer for \"2a272d17e5bbdf289f075d3465617ea31deefacd8330ba4c23c11c449688d555\" returns successfully"
Sep 10 00:52:48.894834 systemd[1]: cri-containerd-2a272d17e5bbdf289f075d3465617ea31deefacd8330ba4c23c11c449688d555.scope: Deactivated successfully.
Sep 10 00:52:48.915646 env[1203]: time="2025-09-10T00:52:48.915570110Z" level=info msg="shim disconnected" id=2a272d17e5bbdf289f075d3465617ea31deefacd8330ba4c23c11c449688d555
Sep 10 00:52:48.915646 env[1203]: time="2025-09-10T00:52:48.915638129Z" level=warning msg="cleaning up after shim disconnected" id=2a272d17e5bbdf289f075d3465617ea31deefacd8330ba4c23c11c449688d555 namespace=k8s.io
Sep 10 00:52:48.915646 env[1203]: time="2025-09-10T00:52:48.915646675Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:48.921666 env[1203]: time="2025-09-10T00:52:48.921633109Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:49.834368 kubelet[1946]: E0910 00:52:49.834326 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:49.837208 env[1203]: time="2025-09-10T00:52:49.837162863Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 10 00:52:49.851555 env[1203]: time="2025-09-10T00:52:49.851504290Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421\""
Sep 10 00:52:49.852029 env[1203]: time="2025-09-10T00:52:49.851970074Z" level=info msg="StartContainer for \"2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421\""
Sep 10 00:52:49.873682 systemd[1]: Started cri-containerd-2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421.scope.
Sep 10 00:52:49.900416 env[1203]: time="2025-09-10T00:52:49.900351654Z" level=info msg="StartContainer for \"2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421\" returns successfully"
Sep 10 00:52:49.902379 systemd[1]: cri-containerd-2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421.scope: Deactivated successfully.
Sep 10 00:52:49.923210 env[1203]: time="2025-09-10T00:52:49.923169672Z" level=info msg="shim disconnected" id=2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421
Sep 10 00:52:49.923416 env[1203]: time="2025-09-10T00:52:49.923373229Z" level=warning msg="cleaning up after shim disconnected" id=2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421 namespace=k8s.io
Sep 10 00:52:49.923416 env[1203]: time="2025-09-10T00:52:49.923395581Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:49.929543 env[1203]: time="2025-09-10T00:52:49.929518946Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3984 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:50.632028 systemd[1]: run-containerd-runc-k8s.io-2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421-runc.Nqy3v3.mount: Deactivated successfully.
Sep 10 00:52:50.632124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bca8116c3832762d2221d28b88dd001175a65c3cfe241a81a829c7148778421-rootfs.mount: Deactivated successfully.
Sep 10 00:52:50.682550 kubelet[1946]: E0910 00:52:50.682503 1946 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 10 00:52:50.837381 kubelet[1946]: E0910 00:52:50.837352 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:50.839002 env[1203]: time="2025-09-10T00:52:50.838957971Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 10 00:52:50.880678 env[1203]: time="2025-09-10T00:52:50.880610994Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312\""
Sep 10 00:52:50.882442 env[1203]: time="2025-09-10T00:52:50.882293137Z" level=info msg="StartContainer for \"8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312\""
Sep 10 00:52:50.900593 systemd[1]: Started cri-containerd-8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312.scope.
Sep 10 00:52:50.921955 systemd[1]: cri-containerd-8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312.scope: Deactivated successfully.
Sep 10 00:52:50.924888 env[1203]: time="2025-09-10T00:52:50.924853704Z" level=info msg="StartContainer for \"8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312\" returns successfully"
Sep 10 00:52:50.945963 env[1203]: time="2025-09-10T00:52:50.945912296Z" level=info msg="shim disconnected" id=8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312
Sep 10 00:52:50.945963 env[1203]: time="2025-09-10T00:52:50.945956250Z" level=warning msg="cleaning up after shim disconnected" id=8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312 namespace=k8s.io
Sep 10 00:52:50.945963 env[1203]: time="2025-09-10T00:52:50.945964194Z" level=info msg="cleaning up dead shim"
Sep 10 00:52:50.952205 env[1203]: time="2025-09-10T00:52:50.952144128Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:52:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4038 runtime=io.containerd.runc.v2\n"
Sep 10 00:52:51.632125 systemd[1]: run-containerd-runc-k8s.io-8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312-runc.eKbvyH.mount: Deactivated successfully.
Sep 10 00:52:51.632223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8742837bcb590f33ccf9d0e6d893511f3526bcf07d32ed233da64da7e1735312-rootfs.mount: Deactivated successfully.
Sep 10 00:52:51.841157 kubelet[1946]: E0910 00:52:51.841123 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:51.842516 env[1203]: time="2025-09-10T00:52:51.842472094Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 10 00:52:51.877139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293050602.mount: Deactivated successfully.
Sep 10 00:52:51.881500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445460646.mount: Deactivated successfully.
Sep 10 00:52:51.885179 env[1203]: time="2025-09-10T00:52:51.885085398Z" level=info msg="CreateContainer within sandbox \"d518c8eb6f855ce20d7950b2c73279702d333b5c5b5deb57b3afc60c50674044\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc32e44da1fc47b6d90adb28da9f23a396241cf0ae66d72a923a7ab58267fbbc\""
Sep 10 00:52:51.885612 env[1203]: time="2025-09-10T00:52:51.885587802Z" level=info msg="StartContainer for \"fc32e44da1fc47b6d90adb28da9f23a396241cf0ae66d72a923a7ab58267fbbc\""
Sep 10 00:52:51.898101 systemd[1]: Started cri-containerd-fc32e44da1fc47b6d90adb28da9f23a396241cf0ae66d72a923a7ab58267fbbc.scope.
Sep 10 00:52:51.926137 env[1203]: time="2025-09-10T00:52:51.926076973Z" level=info msg="StartContainer for \"fc32e44da1fc47b6d90adb28da9f23a396241cf0ae66d72a923a7ab58267fbbc\" returns successfully"
Sep 10 00:52:52.168602 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 10 00:52:52.846490 kubelet[1946]: E0910 00:52:52.846454 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:52.870449 kubelet[1946]: I0910 00:52:52.870383 1946 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7mxsg" podStartSLOduration=6.870364512 podStartE2EDuration="6.870364512s" podCreationTimestamp="2025-09-10 00:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:52:52.870123925 +0000 UTC m=+87.331878706" watchObservedRunningTime="2025-09-10 00:52:52.870364512 +0000 UTC m=+87.332119283"
Sep 10 00:52:54.668500 kubelet[1946]: E0910 00:52:54.668455 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:54.766656 systemd-networkd[1033]: lxc_health: Link UP
Sep 10 00:52:54.777620 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 10 00:52:54.780151 systemd-networkd[1033]: lxc_health: Gained carrier
Sep 10 00:52:56.634050 kubelet[1946]: E0910 00:52:56.633993 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:56.669612 kubelet[1946]: E0910 00:52:56.669539 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:56.710701 systemd-networkd[1033]: lxc_health: Gained IPv6LL
Sep 10 00:52:56.852791 kubelet[1946]: E0910 00:52:56.852747 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:57.633647 kubelet[1946]: E0910 00:52:57.633601 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:52:57.854571 kubelet[1946]: E0910 00:52:57.854528 1946 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:53:00.059920 sshd[3748]: pam_unix(sshd:session): session closed for user core
Sep 10 00:53:00.062398 systemd[1]: sshd@27-10.0.0.139:22-10.0.0.1:48118.service: Deactivated successfully.
Sep 10 00:53:00.063133 systemd[1]: session-28.scope: Deactivated successfully.
Sep 10 00:53:00.063691 systemd-logind[1186]: Session 28 logged out. Waiting for processes to exit.
Sep 10 00:53:00.064360 systemd-logind[1186]: Removed session 28.