Sep 10 00:45:56.923086 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Sep 9 23:10:34 -00 2025
Sep 10 00:45:56.923113 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272
Sep 10 00:45:56.923124 kernel: BIOS-provided physical RAM map:
Sep 10 00:45:56.923131 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 10 00:45:56.923138 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 10 00:45:56.923146 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 10 00:45:56.923155 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 10 00:45:56.923163 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 10 00:45:56.923172 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:45:56.923180 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 10 00:45:56.923187 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 10 00:45:56.923195 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 10 00:45:56.923202 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 10 00:45:56.923210 kernel: NX (Execute Disable) protection: active
Sep 10 00:45:56.923221 kernel: SMBIOS 2.8 present.
Sep 10 00:45:56.923230 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 10 00:45:56.923237 kernel: Hypervisor detected: KVM
Sep 10 00:45:56.923245 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:45:56.923258 kernel: kvm-clock: cpu 0, msr 6c19f001, primary cpu clock
Sep 10 00:45:56.923266 kernel: kvm-clock: using sched offset of 3896017202 cycles
Sep 10 00:45:56.923275 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:45:56.923283 kernel: tsc: Detected 2794.750 MHz processor
Sep 10 00:45:56.923292 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:45:56.923310 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:45:56.923319 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 10 00:45:56.923327 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:45:56.923335 kernel: Using GB pages for direct mapping
Sep 10 00:45:56.923344 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:45:56.923352 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 10 00:45:56.923360 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:45:56.923369 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:45:56.923377 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:45:56.923387 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 10 00:45:56.923396 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:45:56.923404 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:45:56.923412 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:45:56.923421 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:45:56.923429 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 10 00:45:56.923437 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 10 00:45:56.923446 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 10 00:45:56.923459 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 10 00:45:56.923468 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 10 00:45:56.923477 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 10 00:45:56.923486 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 10 00:45:56.923495 kernel: No NUMA configuration found
Sep 10 00:45:56.923504 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 10 00:45:56.923515 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 10 00:45:56.923523 kernel: Zone ranges:
Sep 10 00:45:56.923532 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:45:56.923541 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 10 00:45:56.923550 kernel: Normal empty
Sep 10 00:45:56.923558 kernel: Movable zone start for each node
Sep 10 00:45:56.923567 kernel: Early memory node ranges
Sep 10 00:45:56.923576 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 10 00:45:56.923585 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 10 00:45:56.923596 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 10 00:45:56.923605 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:45:56.923614 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 10 00:45:56.923623 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 10 00:45:56.923631 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:45:56.923640 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:45:56.923649 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:45:56.923659 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:45:56.923667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:45:56.923676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:45:56.923692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:45:56.923702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:45:56.923711 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:45:56.923723 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:45:56.923732 kernel: TSC deadline timer available
Sep 10 00:45:56.923741 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:45:56.923749 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:45:56.923758 kernel: kvm-guest: setup PV sched yield
Sep 10 00:45:56.923767 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 10 00:45:56.923778 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:45:56.923787 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:45:56.923796 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:45:56.923805 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 10 00:45:56.923814 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 10 00:45:56.923823 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:45:56.923832 kernel: kvm-guest: setup async PF for cpu 0
Sep 10 00:45:56.923841 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Sep 10 00:45:56.923849 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:45:56.923860 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:45:56.923869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 10 00:45:56.923878 kernel: Policy zone: DMA32
Sep 10 00:45:56.923889 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272
Sep 10 00:45:56.923898 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:45:56.923907 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:45:56.923916 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:45:56.923925 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:45:56.923936 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 134796K reserved, 0K cma-reserved)
Sep 10 00:45:56.923946 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:45:56.923954 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 10 00:45:56.923972 kernel: ftrace: allocated 136 pages with 2 groups
Sep 10 00:45:56.923981 kernel: rcu: Hierarchical RCU implementation.
Sep 10 00:45:56.923991 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:45:56.924000 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:45:56.924009 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:45:56.924018 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:45:56.924029 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:45:56.924038 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:45:56.924047 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:45:56.924069 kernel: random: crng init done
Sep 10 00:45:56.924078 kernel: Console: colour VGA+ 80x25
Sep 10 00:45:56.924087 kernel: printk: console [ttyS0] enabled
Sep 10 00:45:56.924096 kernel: ACPI: Core revision 20210730
Sep 10 00:45:56.924105 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:45:56.924113 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:45:56.924124 kernel: x2apic enabled
Sep 10 00:45:56.924133 kernel: Switched APIC routing to physical x2apic.
Sep 10 00:45:56.924146 kernel: kvm-guest: setup PV IPIs
Sep 10 00:45:56.924155 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:45:56.924164 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:45:56.924173 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 10 00:45:56.924183 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:45:56.924191 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:45:56.924201 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:45:56.924217 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:45:56.924226 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:45:56.924236 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:45:56.924247 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:45:56.924256 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:45:56.924265 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:45:56.924275 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:45:56.924284 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 10 00:45:56.924294 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:45:56.924305 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:45:56.924314 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:45:56.924324 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:45:56.924335 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 10 00:45:56.924345 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:45:56.924356 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:45:56.924366 kernel: LSM: Security Framework initializing
Sep 10 00:45:56.924377 kernel: SELinux: Initializing.
Sep 10 00:45:56.924386 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:45:56.924396 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:45:56.924405 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:45:56.924415 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:45:56.924424 kernel: ... version:                0
Sep 10 00:45:56.924433 kernel: ... bit width:              48
Sep 10 00:45:56.924442 kernel: ... generic registers:      6
Sep 10 00:45:56.924451 kernel: ... value mask:             0000ffffffffffff
Sep 10 00:45:56.924462 kernel: ... max period:             00007fffffffffff
Sep 10 00:45:56.924472 kernel: ... fixed-purpose events:   0
Sep 10 00:45:56.924481 kernel: ... event mask:             000000000000003f
Sep 10 00:45:56.924490 kernel: signal: max sigframe size: 1776
Sep 10 00:45:56.924499 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:45:56.924509 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:45:56.924518 kernel: x86: Booting SMP configuration:
Sep 10 00:45:56.924527 kernel: .... node #0, CPUs: #1
Sep 10 00:45:56.924537 kernel: kvm-clock: cpu 1, msr 6c19f041, secondary cpu clock
Sep 10 00:45:56.924546 kernel: kvm-guest: setup async PF for cpu 1
Sep 10 00:45:56.924557 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Sep 10 00:45:56.924566 kernel: #2
Sep 10 00:45:56.924576 kernel: kvm-clock: cpu 2, msr 6c19f081, secondary cpu clock
Sep 10 00:45:56.924585 kernel: kvm-guest: setup async PF for cpu 2
Sep 10 00:45:56.924594 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Sep 10 00:45:56.924607 kernel: #3
Sep 10 00:45:56.924617 kernel: kvm-clock: cpu 3, msr 6c19f0c1, secondary cpu clock
Sep 10 00:45:56.924626 kernel: kvm-guest: setup async PF for cpu 3
Sep 10 00:45:56.924635 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Sep 10 00:45:56.924646 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:45:56.924655 kernel: smpboot: Max logical packages: 1
Sep 10 00:45:56.924665 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 10 00:45:56.924674 kernel: devtmpfs: initialized
Sep 10 00:45:56.924684 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:45:56.924693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:45:56.924703 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:45:56.924712 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:45:56.924721 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:45:56.924733 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:45:56.924742 kernel: audit: type=2000 audit(1757465157.019:1): state=initialized audit_enabled=0 res=1
Sep 10 00:45:56.924751 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:45:56.924770 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:45:56.924798 kernel: cpuidle: using governor menu
Sep 10 00:45:56.924831 kernel: ACPI: bus type PCI registered
Sep 10 00:45:56.924852 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:45:56.924862 kernel: dca service started, version 1.12.1
Sep 10 00:45:56.924871 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:45:56.924884 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 10 00:45:56.924893 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:45:56.924902 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:45:56.924911 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:45:56.924920 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:45:56.924930 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:45:56.924939 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:45:56.924949 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:45:56.924958 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 10 00:45:56.924982 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 10 00:45:56.924991 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 10 00:45:56.925000 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:45:56.925009 kernel: ACPI: Interpreter enabled
Sep 10 00:45:56.925018 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:45:56.925028 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:45:56.925037 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:45:56.925046 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:45:56.925072 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:45:56.926327 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:45:56.926471 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:45:56.926587 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:45:56.926602 kernel: PCI host bridge to bus 0000:00
Sep 10 00:45:56.926739 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:45:56.926835 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:45:56.926987 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:45:56.927110 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:45:56.927210 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:45:56.927307 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 10 00:45:56.927409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:45:56.928130 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:45:56.928269 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:45:56.928379 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 10 00:45:56.928482 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 10 00:45:56.928584 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 10 00:45:56.928686 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:45:56.928811 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:45:56.928918 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 10 00:45:56.929039 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 10 00:45:56.929168 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 10 00:45:56.929288 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:45:56.929393 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 10 00:45:56.929497 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 10 00:45:56.929599 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 10 00:45:56.929722 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:45:56.929830 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 10 00:45:56.929933 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 10 00:45:56.930050 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 10 00:45:56.930218 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 10 00:45:56.930370 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:45:56.930475 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:45:56.930608 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:45:56.930716 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 10 00:45:56.930817 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 10 00:45:56.930937 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:45:56.931050 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 10 00:45:56.931080 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:45:56.931091 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:45:56.931100 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:45:56.931109 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:45:56.931122 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:45:56.931131 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:45:56.931141 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:45:56.931150 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:45:56.931159 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:45:56.931168 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:45:56.931178 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:45:56.931187 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:45:56.931196 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:45:56.931208 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:45:56.931217 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:45:56.931226 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:45:56.931235 kernel: iommu: Default domain type: Translated
Sep 10 00:45:56.931244 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:45:56.931361 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:45:56.931487 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:45:56.931595 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:45:56.931612 kernel: vgaarb: loaded
Sep 10 00:45:56.931621 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 10 00:45:56.931631 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 10 00:45:56.931640 kernel: PTP clock support registered
Sep 10 00:45:56.931649 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:45:56.931659 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:45:56.931668 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 10 00:45:56.931678 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 10 00:45:56.931687 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:45:56.931698 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:45:56.931707 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:45:56.931717 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:45:56.931726 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:45:56.931736 kernel: pnp: PnP ACPI init
Sep 10 00:45:56.931862 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:45:56.931877 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:45:56.931887 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:45:56.931899 kernel: NET: Registered PF_INET protocol family
Sep 10 00:45:56.931909 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:45:56.931918 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:45:56.931928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:45:56.931937 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:45:56.931947 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 10 00:45:56.931956 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:45:56.931973 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:45:56.931983 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:45:56.931994 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:45:56.932003 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:45:56.932123 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:45:56.932217 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:45:56.932322 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:45:56.932422 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:45:56.932512 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:45:56.932600 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 10 00:45:56.932617 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:45:56.932626 kernel: Initialise system trusted keyrings
Sep 10 00:45:56.932635 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:45:56.932645 kernel: Key type asymmetric registered
Sep 10 00:45:56.932654 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:45:56.932663 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 00:45:56.932673 kernel: io scheduler mq-deadline registered
Sep 10 00:45:56.932682 kernel: io scheduler kyber registered
Sep 10 00:45:56.932691 kernel: io scheduler bfq registered
Sep 10 00:45:56.932701 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:45:56.932712 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:45:56.932722 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:45:56.932731 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:45:56.932740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:45:56.932750 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:45:56.932759 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:45:56.932768 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:45:56.932778 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:45:56.932885 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:45:56.932903 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:45:56.933005 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:45:56.933117 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:45:56 UTC (1757465156)
Sep 10 00:45:56.933210 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:45:56.933223 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:45:56.933233 kernel: Segment Routing with IPv6
Sep 10 00:45:56.933242 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:45:56.933279 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:45:56.933303 kernel: Key type dns_resolver registered
Sep 10 00:45:56.933315 kernel: IPI shorthand broadcast: enabled
Sep 10 00:45:56.933326 kernel: sched_clock: Marking stable (460519654, 100637200)->(628352251, -67195397)
Sep 10 00:45:56.933338 kernel: registered taskstats version 1
Sep 10 00:45:56.933350 kernel: Loading compiled-in X.509 certificates
Sep 10 00:45:56.933362 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 3af57cd809cc9e43d7af9f276bb20b532a4147af'
Sep 10 00:45:56.933374 kernel: Key type .fscrypt registered
Sep 10 00:45:56.933385 kernel: Key type fscrypt-provisioning registered
Sep 10 00:45:56.933397 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:45:56.933412 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:45:56.933424 kernel: ima: No architecture policies found
Sep 10 00:45:56.933436 kernel: clk: Disabling unused clocks
Sep 10 00:45:56.933447 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 10 00:45:56.933459 kernel: Write protecting the kernel read-only data: 28672k
Sep 10 00:45:56.933470 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 10 00:45:56.933482 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 10 00:45:56.933494 kernel: Run /init as init process
Sep 10 00:45:56.933508 kernel:   with arguments:
Sep 10 00:45:56.933519 kernel:     /init
Sep 10 00:45:56.933530 kernel:   with environment:
Sep 10 00:45:56.933540 kernel:     HOME=/
Sep 10 00:45:56.933548 kernel:     TERM=linux
Sep 10 00:45:56.933558 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:45:56.933570 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 10 00:45:56.933582 systemd[1]: Detected virtualization kvm.
Sep 10 00:45:56.933601 systemd[1]: Detected architecture x86-64.
Sep 10 00:45:56.933612 systemd[1]: Running in initrd.
Sep 10 00:45:56.933622 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:45:56.933632 systemd[1]: Hostname set to .
Sep 10 00:45:56.933642 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:45:56.933652 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:45:56.933662 systemd[1]: Started systemd-ask-password-console.path.
Sep 10 00:45:56.933673 systemd[1]: Reached target cryptsetup.target.
Sep 10 00:45:56.933683 systemd[1]: Reached target paths.target.
Sep 10 00:45:56.933696 systemd[1]: Reached target slices.target.
Sep 10 00:45:56.933714 systemd[1]: Reached target swap.target.
Sep 10 00:45:56.933726 systemd[1]: Reached target timers.target.
Sep 10 00:45:56.933737 systemd[1]: Listening on iscsid.socket.
Sep 10 00:45:56.933747 systemd[1]: Listening on iscsiuio.socket.
Sep 10 00:45:56.933760 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 10 00:45:56.933770 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 10 00:45:56.933781 systemd[1]: Listening on systemd-journald.socket.
Sep 10 00:45:56.933791 systemd[1]: Listening on systemd-networkd.socket.
Sep 10 00:45:56.933802 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 10 00:45:56.933812 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 10 00:45:56.933822 systemd[1]: Reached target sockets.target.
Sep 10 00:45:56.933833 systemd[1]: Starting kmod-static-nodes.service...
Sep 10 00:45:56.933843 systemd[1]: Finished network-cleanup.service.
Sep 10 00:45:56.933855 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:45:56.933866 systemd[1]: Starting systemd-journald.service...
Sep 10 00:45:56.933876 systemd[1]: Starting systemd-modules-load.service...
Sep 10 00:45:56.933887 systemd[1]: Starting systemd-resolved.service...
Sep 10 00:45:56.933897 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 10 00:45:56.933910 systemd[1]: Finished kmod-static-nodes.service.
Sep 10 00:45:56.933920 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:45:56.933931 kernel: audit: type=1130 audit(1757465156.921:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.933943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 10 00:45:56.933956 systemd-journald[198]: Journal started
Sep 10 00:45:56.934028 systemd-journald[198]: Runtime Journal (/run/log/journal/69e2fddfd45342228e0f7d258dece086) is 6.0M, max 48.5M, 42.5M free.
Sep 10 00:45:56.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.930187 systemd-modules-load[199]: Inserted module 'overlay'
Sep 10 00:45:56.967869 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:45:56.967902 systemd[1]: Started systemd-journald.service.
Sep 10 00:45:56.947888 systemd-resolved[200]: Positive Trust Anchors:
Sep 10 00:45:56.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.947898 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:45:56.947937 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 10 00:45:56.951086 systemd-resolved[200]: Defaulting to hostname 'linux'.
Sep 10 00:45:56.982589 kernel: audit: type=1130 audit(1757465156.967:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.968466 systemd[1]: Started systemd-resolved.service.
Sep 10 00:45:56.984925 kernel: Bridge firewalling registered
Sep 10 00:45:56.973842 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 10 00:45:56.975248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 10 00:45:56.983226 systemd[1]: Reached target nss-lookup.target.
Sep 10 00:45:56.984914 systemd-modules-load[199]: Inserted module 'br_netfilter'
Sep 10 00:45:56.986210 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 10 00:45:56.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.997079 kernel: audit: type=1130 audit(1757465156.973:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.997110 kernel: audit: type=1130 audit(1757465156.973:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.997130 kernel: audit: type=1130 audit(1757465156.982:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:56.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:57.001672 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 10 00:45:57.007422 kernel: audit: type=1130 audit(1757465157.002:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:57.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:45:57.006041 systemd[1]: Starting dracut-cmdline.service...
Sep 10 00:45:57.013089 kernel: SCSI subsystem initialized Sep 10 00:45:57.015174 dracut-cmdline[216]: dracut-dracut-053 Sep 10 00:45:57.016977 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:45:57.026085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 00:45:57.026128 kernel: device-mapper: uevent: version 1.0.3 Sep 10 00:45:57.026141 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 10 00:45:57.030388 systemd-modules-load[199]: Inserted module 'dm_multipath' Sep 10 00:45:57.032095 systemd[1]: Finished systemd-modules-load.service. Sep 10 00:45:57.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.033788 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:45:57.038719 kernel: audit: type=1130 audit(1757465157.032:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.043567 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:45:57.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:45:57.048083 kernel: audit: type=1130 audit(1757465157.044:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.077101 kernel: Loading iSCSI transport class v2.0-870. Sep 10 00:45:57.094102 kernel: iscsi: registered transport (tcp) Sep 10 00:45:57.115097 kernel: iscsi: registered transport (qla4xxx) Sep 10 00:45:57.115143 kernel: QLogic iSCSI HBA Driver Sep 10 00:45:57.142556 systemd[1]: Finished dracut-cmdline.service. Sep 10 00:45:57.147809 kernel: audit: type=1130 audit(1757465157.142:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.143662 systemd[1]: Starting dracut-pre-udev.service... Sep 10 00:45:57.190096 kernel: raid6: avx2x4 gen() 30074 MB/s Sep 10 00:45:57.207087 kernel: raid6: avx2x4 xor() 7237 MB/s Sep 10 00:45:57.224084 kernel: raid6: avx2x2 gen() 31258 MB/s Sep 10 00:45:57.241083 kernel: raid6: avx2x2 xor() 18706 MB/s Sep 10 00:45:57.258082 kernel: raid6: avx2x1 gen() 25795 MB/s Sep 10 00:45:57.275083 kernel: raid6: avx2x1 xor() 14841 MB/s Sep 10 00:45:57.292087 kernel: raid6: sse2x4 gen() 14311 MB/s Sep 10 00:45:57.309083 kernel: raid6: sse2x4 xor() 7059 MB/s Sep 10 00:45:57.326085 kernel: raid6: sse2x2 gen() 15947 MB/s Sep 10 00:45:57.343083 kernel: raid6: sse2x2 xor() 9563 MB/s Sep 10 00:45:57.360089 kernel: raid6: sse2x1 gen() 11767 MB/s Sep 10 00:45:57.377446 kernel: raid6: sse2x1 xor() 7601 MB/s Sep 10 00:45:57.377468 kernel: raid6: using algorithm avx2x2 gen() 31258 MB/s Sep 10 00:45:57.377480 kernel: raid6: .... 
xor() 18706 MB/s, rmw enabled Sep 10 00:45:57.378173 kernel: raid6: using avx2x2 recovery algorithm Sep 10 00:45:57.391089 kernel: xor: automatically using best checksumming function avx Sep 10 00:45:57.482097 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 10 00:45:57.490011 systemd[1]: Finished dracut-pre-udev.service. Sep 10 00:45:57.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.491000 audit: BPF prog-id=7 op=LOAD Sep 10 00:45:57.491000 audit: BPF prog-id=8 op=LOAD Sep 10 00:45:57.492033 systemd[1]: Starting systemd-udevd.service... Sep 10 00:45:57.505079 systemd-udevd[400]: Using default interface naming scheme 'v252'. Sep 10 00:45:57.509514 systemd[1]: Started systemd-udevd.service. Sep 10 00:45:57.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.511458 systemd[1]: Starting dracut-pre-trigger.service... Sep 10 00:45:57.524201 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Sep 10 00:45:57.546946 systemd[1]: Finished dracut-pre-trigger.service. Sep 10 00:45:57.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:57.548690 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:45:57.587676 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:45:57.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:45:57.625089 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 00:45:57.633846 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 00:45:57.633860 kernel: GPT:9289727 != 19775487 Sep 10 00:45:57.633869 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 00:45:57.633878 kernel: GPT:9289727 != 19775487 Sep 10 00:45:57.633886 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 00:45:57.633895 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:45:57.636078 kernel: cryptd: max_cpu_qlen set to 1000 Sep 10 00:45:57.643073 kernel: libata version 3.00 loaded. Sep 10 00:45:57.654459 kernel: AVX2 version of gcm_enc/dec engaged. Sep 10 00:45:57.654525 kernel: AES CTR mode by8 optimization enabled Sep 10 00:45:57.660079 kernel: ahci 0000:00:1f.2: version 3.0 Sep 10 00:45:57.674680 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 10 00:45:57.674697 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 10 00:45:57.674793 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 10 00:45:57.675001 kernel: scsi host0: ahci Sep 10 00:45:57.675117 kernel: scsi host1: ahci Sep 10 00:45:57.675213 kernel: scsi host2: ahci Sep 10 00:45:57.675302 kernel: scsi host3: ahci Sep 10 00:45:57.675390 kernel: scsi host4: ahci Sep 10 00:45:57.675492 kernel: scsi host5: ahci Sep 10 00:45:57.675594 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 10 00:45:57.675604 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 10 00:45:57.675613 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 10 00:45:57.675622 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 10 00:45:57.675631 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 10 00:45:57.675639 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 
port 0xfebd4380 irq 34 Sep 10 00:45:57.672414 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 10 00:45:57.718712 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (446) Sep 10 00:45:57.723581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 10 00:45:57.725533 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 10 00:45:57.736003 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 10 00:45:57.739987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:45:57.741883 systemd[1]: Starting disk-uuid.service... Sep 10 00:45:57.980106 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 10 00:45:57.980186 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 10 00:45:57.985119 disk-uuid[525]: Primary Header is updated. Sep 10 00:45:57.985119 disk-uuid[525]: Secondary Entries is updated. Sep 10 00:45:57.985119 disk-uuid[525]: Secondary Header is updated. 
Sep 10 00:45:57.992586 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 10 00:45:57.992608 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 10 00:45:57.992619 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 10 00:45:57.993626 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 10 00:45:57.993654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:45:57.993694 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 10 00:45:57.996089 kernel: ata3.00: applying bridge limits Sep 10 00:45:57.997087 kernel: ata3.00: configured for UDMA/100 Sep 10 00:45:57.998099 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 10 00:45:57.999132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:45:58.084112 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 10 00:45:58.102889 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 10 00:45:58.102909 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 10 00:45:59.001089 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:45:59.001604 disk-uuid[526]: The operation has completed successfully. Sep 10 00:45:59.028492 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 00:45:59.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.028592 systemd[1]: Finished disk-uuid.service. Sep 10 00:45:59.039452 systemd[1]: Starting verity-setup.service... Sep 10 00:45:59.056090 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 10 00:45:59.083033 systemd[1]: Found device dev-mapper-usr.device. 
Sep 10 00:45:59.085989 systemd[1]: Mounting sysusr-usr.mount... Sep 10 00:45:59.088870 systemd[1]: Finished verity-setup.service. Sep 10 00:45:59.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.192089 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 10 00:45:59.192146 systemd[1]: Mounted sysusr-usr.mount. Sep 10 00:45:59.192559 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 10 00:45:59.193318 systemd[1]: Starting ignition-setup.service... Sep 10 00:45:59.198320 systemd[1]: Starting parse-ip-for-networkd.service... Sep 10 00:45:59.205662 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:45:59.205705 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:45:59.205715 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:45:59.217268 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 10 00:45:59.228397 systemd[1]: Finished ignition-setup.service. Sep 10 00:45:59.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.229604 systemd[1]: Starting ignition-fetch-offline.service... Sep 10 00:45:59.364164 systemd[1]: Finished parse-ip-for-networkd.service. Sep 10 00:45:59.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.366000 audit: BPF prog-id=9 op=LOAD Sep 10 00:45:59.367434 systemd[1]: Starting systemd-networkd.service... 
Sep 10 00:45:59.396588 systemd-networkd[716]: lo: Link UP Sep 10 00:45:59.396597 systemd-networkd[716]: lo: Gained carrier Sep 10 00:45:59.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.397103 systemd-networkd[716]: Enumeration completed Sep 10 00:45:59.397209 systemd[1]: Started systemd-networkd.service. Sep 10 00:45:59.397700 systemd[1]: Reached target network.target. Sep 10 00:45:59.398051 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:45:59.401603 systemd-networkd[716]: eth0: Link UP Sep 10 00:45:59.401608 systemd-networkd[716]: eth0: Gained carrier Sep 10 00:45:59.407334 systemd[1]: Starting iscsiuio.service... Sep 10 00:45:59.416038 ignition[637]: Ignition 2.14.0 Sep 10 00:45:59.416050 ignition[637]: Stage: fetch-offline Sep 10 00:45:59.416197 ignition[637]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:45:59.416207 ignition[637]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:45:59.416331 ignition[637]: parsed url from cmdline: "" Sep 10 00:45:59.416335 ignition[637]: no config URL provided Sep 10 00:45:59.416342 ignition[637]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 00:45:59.416351 ignition[637]: no config at "/usr/lib/ignition/user.ign" Sep 10 00:45:59.416375 ignition[637]: op(1): [started] loading QEMU firmware config module Sep 10 00:45:59.416391 ignition[637]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 00:45:59.426030 ignition[637]: op(1): [finished] loading QEMU firmware config module Sep 10 00:45:59.465726 ignition[637]: parsing config with SHA512: 8616f6468d5dbcef40d9c344f14d6f876b50764cd544d49f19b7410f1508a3f53b24ca378ff1f9a44bf88d51046afe83ee46f04f6ab685773285710f322a7d11 Sep 10 00:45:59.514328 unknown[637]: fetched base config from "system" Sep 10 00:45:59.514343 
unknown[637]: fetched user config from "qemu" Sep 10 00:45:59.516526 ignition[637]: fetch-offline: fetch-offline passed Sep 10 00:45:59.517452 ignition[637]: Ignition finished successfully Sep 10 00:45:59.520261 systemd[1]: Finished ignition-fetch-offline.service. Sep 10 00:45:59.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.522266 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 00:45:59.524360 systemd[1]: Starting ignition-kargs.service... Sep 10 00:45:59.528257 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:45:59.538729 systemd[1]: Started iscsiuio.service. Sep 10 00:45:59.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.540820 systemd[1]: Starting iscsid.service... Sep 10 00:45:59.543495 ignition[722]: Ignition 2.14.0 Sep 10 00:45:59.543505 ignition[722]: Stage: kargs Sep 10 00:45:59.543646 ignition[722]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:45:59.543656 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:45:59.544919 ignition[722]: kargs: kargs passed Sep 10 00:45:59.547861 iscsid[729]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:45:59.547861 iscsid[729]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 10 00:45:59.547861 iscsid[729]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 10 00:45:59.547861 iscsid[729]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 10 00:45:59.547861 iscsid[729]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:45:59.547861 iscsid[729]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 10 00:45:59.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.544968 ignition[722]: Ignition finished successfully Sep 10 00:45:59.548680 systemd[1]: Finished ignition-kargs.service. Sep 10 00:45:59.550875 systemd[1]: Started iscsid.service. Sep 10 00:45:59.558237 systemd[1]: Starting dracut-initqueue.service... Sep 10 00:45:59.561213 systemd[1]: Starting ignition-disks.service... Sep 10 00:45:59.572322 systemd[1]: Finished dracut-initqueue.service. Sep 10 00:45:59.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.572993 systemd[1]: Reached target remote-fs-pre.target. Sep 10 00:45:59.574487 systemd[1]: Reached target remote-cryptsetup.target. Sep 10 00:45:59.576325 systemd[1]: Reached target remote-fs.target. Sep 10 00:45:59.579074 systemd[1]: Starting dracut-pre-mount.service... Sep 10 00:45:59.588928 systemd[1]: Finished dracut-pre-mount.service. 
Sep 10 00:45:59.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.714824 ignition[731]: Ignition 2.14.0 Sep 10 00:45:59.714835 ignition[731]: Stage: disks Sep 10 00:45:59.714969 ignition[731]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:45:59.714982 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:45:59.724577 ignition[731]: disks: disks passed Sep 10 00:45:59.724646 ignition[731]: Ignition finished successfully Sep 10 00:45:59.727274 systemd[1]: Finished ignition-disks.service. Sep 10 00:45:59.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.729028 systemd[1]: Reached target initrd-root-device.target. Sep 10 00:45:59.730709 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:45:59.731411 systemd[1]: Reached target local-fs.target. Sep 10 00:45:59.731708 systemd[1]: Reached target sysinit.target. Sep 10 00:45:59.732053 systemd[1]: Reached target basic.target. Sep 10 00:45:59.736229 systemd[1]: Starting systemd-fsck-root.service... Sep 10 00:45:59.750227 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.93 Sep 10 00:45:59.750242 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Sep 10 00:45:59.752199 systemd-fsck[751]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 10 00:45:59.801422 systemd[1]: Finished systemd-fsck-root.service. Sep 10 00:45:59.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.802992 systemd[1]: Mounting sysroot.mount... 
Sep 10 00:45:59.811717 systemd[1]: Mounted sysroot.mount. Sep 10 00:45:59.812624 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 10 00:45:59.812455 systemd[1]: Reached target initrd-root-fs.target. Sep 10 00:45:59.814328 systemd[1]: Mounting sysroot-usr.mount... Sep 10 00:45:59.815732 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 10 00:45:59.815777 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 00:45:59.815807 systemd[1]: Reached target ignition-diskful.target. Sep 10 00:45:59.817900 systemd[1]: Mounted sysroot-usr.mount. Sep 10 00:45:59.822435 systemd[1]: Starting initrd-setup-root.service... Sep 10 00:45:59.829789 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 00:45:59.834303 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory Sep 10 00:45:59.838498 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 00:45:59.842562 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 00:45:59.874851 systemd[1]: Finished initrd-setup-root.service. Sep 10 00:45:59.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.875937 systemd[1]: Starting ignition-mount.service... Sep 10 00:45:59.878479 systemd[1]: Starting sysroot-boot.service... Sep 10 00:45:59.883598 bash[802]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 10 00:45:59.894400 ignition[804]: INFO : Ignition 2.14.0 Sep 10 00:45:59.894400 ignition[804]: INFO : Stage: mount Sep 10 00:45:59.896484 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:45:59.896484 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:45:59.896484 ignition[804]: INFO : mount: mount passed Sep 10 00:45:59.896484 ignition[804]: INFO : Ignition finished successfully Sep 10 00:45:59.902206 systemd[1]: Finished ignition-mount.service. Sep 10 00:45:59.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:45:59.906273 systemd[1]: Finished sysroot-boot.service. Sep 10 00:45:59.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:00.099514 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 10 00:46:00.108081 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Sep 10 00:46:00.110495 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:46:00.110520 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:46:00.110530 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:46:00.114776 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 10 00:46:00.116595 systemd[1]: Starting ignition-files.service... 
Sep 10 00:46:00.134024 ignition[832]: INFO : Ignition 2.14.0 Sep 10 00:46:00.134024 ignition[832]: INFO : Stage: files Sep 10 00:46:00.135753 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:46:00.135753 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:46:00.135753 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Sep 10 00:46:00.139669 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 00:46:00.139669 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 00:46:00.142736 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 00:46:00.142736 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 00:46:00.142736 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 00:46:00.142736 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 10 00:46:00.142736 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 10 00:46:00.142736 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 00:46:00.142736 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 10 00:46:00.141349 unknown[832]: wrote ssh authorized keys file for user: core Sep 10 00:46:00.203366 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 10 00:46:00.876911 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 
00:46:00.879626 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:46:00.879626 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 10 00:46:01.013554 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 10 00:46:01.302479 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 10 00:46:01.302479 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:46:01.306696 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 10 00:46:01.429284 systemd-networkd[716]: eth0: Gained IPv6LL
Sep 10 00:46:01.713792 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 10 00:46:03.357029 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 10 00:46:03.357029 ignition[832]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 10 00:46:03.388995 ignition[832]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 10 00:46:03.392006 ignition[832]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 10 00:46:03.392006 ignition[832]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 10 00:46:03.392006 ignition[832]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 10 00:46:03.398173 ignition[832]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:46:03.400707 ignition[832]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:46:03.400707 ignition[832]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 10 00:46:03.400707 ignition[832]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Sep 10 00:46:03.400707 ignition[832]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:46:03.410649 ignition[832]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:46:03.410649 ignition[832]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Sep 10 00:46:03.410649 ignition[832]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:46:03.410649 ignition[832]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:46:03.556962 ignition[832]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:46:03.560291 ignition[832]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:46:03.560291 ignition[832]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:46:03.560291 ignition[832]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:46:03.566598 ignition[832]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:46:03.566598 ignition[832]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:46:03.566598 ignition[832]: INFO : files: files passed
Sep 10 00:46:03.571476 ignition[832]: INFO : Ignition finished successfully
Sep 10 00:46:03.573413 systemd[1]: Finished ignition-files.service.
Sep 10 00:46:03.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.575951 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 10 00:46:03.582141 kernel: kauditd_printk_skb: 24 callbacks suppressed
Sep 10 00:46:03.582169 kernel: audit: type=1130 audit(1757465163.574:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.580541 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 10 00:46:03.581885 systemd[1]: Starting ignition-quench.service...
Sep 10 00:46:03.587462 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 10 00:46:03.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.591166 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:46:03.601844 kernel: audit: type=1130 audit(1757465163.590:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.601878 kernel: audit: type=1131 audit(1757465163.590:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.587701 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:46:03.608972 kernel: audit: type=1130 audit(1757465163.601:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.587845 systemd[1]: Finished ignition-quench.service.
Sep 10 00:46:03.591438 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 10 00:46:03.602255 systemd[1]: Reached target ignition-complete.target.
Sep 10 00:46:03.610447 systemd[1]: Starting initrd-parse-etc.service...
Sep 10 00:46:03.632201 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:46:03.632345 systemd[1]: Finished initrd-parse-etc.service.
Sep 10 00:46:03.643487 kernel: audit: type=1130 audit(1757465163.634:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.643515 kernel: audit: type=1131 audit(1757465163.634:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.634566 systemd[1]: Reached target initrd-fs.target.
Sep 10 00:46:03.642521 systemd[1]: Reached target initrd.target.
Sep 10 00:46:03.643135 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 10 00:46:03.644352 systemd[1]: Starting dracut-pre-pivot.service...
Sep 10 00:46:03.659706 systemd[1]: Finished dracut-pre-pivot.service.
Sep 10 00:46:03.666208 kernel: audit: type=1130 audit(1757465163.660:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.661844 systemd[1]: Starting initrd-cleanup.service...
Sep 10 00:46:03.673141 systemd[1]: Stopped target nss-lookup.target.
Sep 10 00:46:03.673620 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 10 00:46:03.673861 systemd[1]: Stopped target timers.target.
Sep 10 00:46:03.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.676953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:46:03.685097 kernel: audit: type=1131 audit(1757465163.678:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.677122 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 10 00:46:03.679101 systemd[1]: Stopped target initrd.target.
Sep 10 00:46:03.684724 systemd[1]: Stopped target basic.target.
Sep 10 00:46:03.685580 systemd[1]: Stopped target ignition-complete.target.
Sep 10 00:46:03.687856 systemd[1]: Stopped target ignition-diskful.target.
Sep 10 00:46:03.689595 systemd[1]: Stopped target initrd-root-device.target.
Sep 10 00:46:03.691593 systemd[1]: Stopped target remote-fs.target.
Sep 10 00:46:03.693176 systemd[1]: Stopped target remote-fs-pre.target.
Sep 10 00:46:03.695156 systemd[1]: Stopped target sysinit.target.
Sep 10 00:46:03.697236 systemd[1]: Stopped target local-fs.target.
Sep 10 00:46:03.699137 systemd[1]: Stopped target local-fs-pre.target.
Sep 10 00:46:03.709077 kernel: audit: type=1131 audit(1757465163.704:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.700927 systemd[1]: Stopped target swap.target.
Sep 10 00:46:03.702631 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:46:03.716689 kernel: audit: type=1131 audit(1757465163.711:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.702799 systemd[1]: Stopped dracut-pre-mount.service.
Sep 10 00:46:03.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.704621 systemd[1]: Stopped target cryptsetup.target.
Sep 10 00:46:03.709666 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:46:03.709827 systemd[1]: Stopped dracut-initqueue.service.
Sep 10 00:46:03.711618 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:46:03.711757 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 10 00:46:03.717614 systemd[1]: Stopped target paths.target.
Sep 10 00:46:03.718841 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:46:03.724180 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 10 00:46:03.729289 systemd[1]: Stopped target slices.target.
Sep 10 00:46:03.731258 systemd[1]: Stopped target sockets.target.
Sep 10 00:46:03.733387 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:46:03.734897 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 10 00:46:03.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.737485 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:46:03.737646 systemd[1]: Stopped ignition-files.service.
Sep 10 00:46:03.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.742504 systemd[1]: Stopping ignition-mount.service...
Sep 10 00:46:03.744745 systemd[1]: Stopping iscsid.service...
Sep 10 00:46:03.745847 iscsid[729]: iscsid shutting down.
Sep 10 00:46:03.749093 systemd[1]: Stopping sysroot-boot.service...
Sep 10 00:46:03.750079 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:46:03.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.754296 ignition[872]: INFO : Ignition 2.14.0
Sep 10 00:46:03.754296 ignition[872]: INFO : Stage: umount
Sep 10 00:46:03.754296 ignition[872]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:46:03.754296 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:46:03.754296 ignition[872]: INFO : umount: umount passed
Sep 10 00:46:03.754296 ignition[872]: INFO : Ignition finished successfully
Sep 10 00:46:03.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.750400 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 10 00:46:03.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.752136 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:46:03.752344 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 10 00:46:03.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.757731 systemd[1]: iscsid.service: Deactivated successfully.
Sep 10 00:46:03.757902 systemd[1]: Stopped iscsid.service.
Sep 10 00:46:03.759991 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:46:03.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.760143 systemd[1]: Stopped ignition-mount.service.
Sep 10 00:46:03.763953 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:46:03.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.764095 systemd[1]: Finished initrd-cleanup.service.
Sep 10 00:46:03.767422 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:46:03.768726 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:46:03.768775 systemd[1]: Closed iscsid.socket.
Sep 10 00:46:03.769692 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:46:03.769755 systemd[1]: Stopped ignition-disks.service.
Sep 10 00:46:03.770425 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:46:03.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.770465 systemd[1]: Stopped ignition-kargs.service.
Sep 10 00:46:03.770540 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:46:03.770575 systemd[1]: Stopped ignition-setup.service.
Sep 10 00:46:03.771482 systemd[1]: Stopping iscsiuio.service...
Sep 10 00:46:03.772082 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 00:46:03.772176 systemd[1]: Stopped sysroot-boot.service.
Sep 10 00:46:03.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.772406 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 00:46:03.772442 systemd[1]: Stopped initrd-setup-root.service.
Sep 10 00:46:03.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.775242 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 10 00:46:03.775321 systemd[1]: Stopped iscsiuio.service.
Sep 10 00:46:03.776847 systemd[1]: Stopped target network.target.
Sep 10 00:46:03.778491 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:46:03.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.778526 systemd[1]: Closed iscsiuio.socket.
Sep 10 00:46:03.780084 systemd[1]: Stopping systemd-networkd.service...
Sep 10 00:46:03.781802 systemd[1]: Stopping systemd-resolved.service...
Sep 10 00:46:03.784137 systemd-networkd[716]: eth0: DHCPv6 lease lost
Sep 10 00:46:03.808000 audit: BPF prog-id=9 op=UNLOAD
Sep 10 00:46:03.808000 audit: BPF prog-id=6 op=UNLOAD
Sep 10 00:46:03.785452 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:46:03.785573 systemd[1]: Stopped systemd-networkd.service.
Sep 10 00:46:03.788140 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:46:03.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.788197 systemd[1]: Closed systemd-networkd.socket.
Sep 10 00:46:03.790667 systemd[1]: Stopping network-cleanup.service...
Sep 10 00:46:03.793115 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:46:03.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.793199 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 10 00:46:03.795264 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:46:03.795324 systemd[1]: Stopped systemd-sysctl.service.
Sep 10 00:46:03.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.796838 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:46:03.796897 systemd[1]: Stopped systemd-modules-load.service.
Sep 10 00:46:03.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.799661 systemd[1]: Stopping systemd-udevd.service...
Sep 10 00:46:03.802401 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 00:46:03.803125 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:46:03.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.803271 systemd[1]: Stopped systemd-resolved.service.
Sep 10 00:46:03.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.811299 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:46:03.811503 systemd[1]: Stopped systemd-udevd.service.
Sep 10 00:46:03.815822 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:46:03.815964 systemd[1]: Stopped network-cleanup.service.
Sep 10 00:46:03.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 10 00:46:03.817654 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:46:03.817721 systemd[1]: Closed systemd-udevd-control.socket.
Sep 10 00:46:03.819376 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:46:03.819420 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 10 00:46:03.821212 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:46:03.821278 systemd[1]: Stopped dracut-pre-udev.service.
Sep 10 00:46:03.823230 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:46:03.823282 systemd[1]: Stopped dracut-cmdline.service.
Sep 10 00:46:03.824842 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:46:03.824898 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 10 00:46:03.828180 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 10 00:46:03.829735 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 00:46:03.829828 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Sep 10 00:46:03.833458 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:46:03.833548 systemd[1]: Stopped kmod-static-nodes.service.
Sep 10 00:46:03.858000 audit: BPF prog-id=8 op=UNLOAD
Sep 10 00:46:03.858000 audit: BPF prog-id=7 op=UNLOAD
Sep 10 00:46:03.834676 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:46:03.860000 audit: BPF prog-id=5 op=UNLOAD
Sep 10 00:46:03.860000 audit: BPF prog-id=4 op=UNLOAD
Sep 10 00:46:03.834739 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 10 00:46:03.860000 audit: BPF prog-id=3 op=UNLOAD
Sep 10 00:46:03.837853 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 10 00:46:03.838607 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 00:46:03.838718 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 10 00:46:03.840784 systemd[1]: Reached target initrd-switch-root.target.
Sep 10 00:46:03.843445 systemd[1]: Starting initrd-switch-root.service...
Sep 10 00:46:03.852861 systemd[1]: Switching root.
Sep 10 00:46:03.877725 systemd-journald[198]: Journal stopped
Sep 10 00:46:09.619201 systemd-journald[198]: Received SIGTERM from PID 1 (n/a).
Sep 10 00:46:09.619251 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 10 00:46:09.619266 kernel: SELinux: Class anon_inode not defined in policy.
Sep 10 00:46:09.619276 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 10 00:46:09.619287 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:46:09.619296 kernel: SELinux: policy capability open_perms=1
Sep 10 00:46:09.619306 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:46:09.619316 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:46:09.619328 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:46:09.619342 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:46:09.619351 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:46:09.619361 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:46:09.619372 systemd[1]: Successfully loaded SELinux policy in 57.226ms.
Sep 10 00:46:09.619389 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.342ms.
Sep 10 00:46:09.619401 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 10 00:46:09.619412 systemd[1]: Detected virtualization kvm.
Sep 10 00:46:09.619423 systemd[1]: Detected architecture x86-64.
Sep 10 00:46:09.619434 systemd[1]: Detected first boot.
Sep 10 00:46:09.619446 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:46:09.619457 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 10 00:46:09.619467 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:46:09.619478 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 10 00:46:09.619492 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 10 00:46:09.619505 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:46:09.619516 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 00:46:09.619528 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 10 00:46:09.619540 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 10 00:46:09.619550 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 10 00:46:09.619561 systemd[1]: Created slice system-getty.slice.
Sep 10 00:46:09.619571 systemd[1]: Created slice system-modprobe.slice.
Sep 10 00:46:09.619582 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 10 00:46:09.619592 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 10 00:46:09.619603 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 10 00:46:09.619613 systemd[1]: Created slice user.slice.
Sep 10 00:46:09.619625 systemd[1]: Started systemd-ask-password-console.path.
Sep 10 00:46:09.619643 systemd[1]: Started systemd-ask-password-wall.path.
Sep 10 00:46:09.619655 systemd[1]: Set up automount boot.automount.
Sep 10 00:46:09.619666 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 10 00:46:09.619679 systemd[1]: Reached target integritysetup.target.
Sep 10 00:46:09.619689 systemd[1]: Reached target remote-cryptsetup.target.
Sep 10 00:46:09.619701 systemd[1]: Reached target remote-fs.target.
Sep 10 00:46:09.619711 systemd[1]: Reached target slices.target.
Sep 10 00:46:09.619724 systemd[1]: Reached target swap.target.
Sep 10 00:46:09.619735 systemd[1]: Reached target torcx.target.
Sep 10 00:46:09.619745 systemd[1]: Reached target veritysetup.target.
Sep 10 00:46:09.619756 systemd[1]: Listening on systemd-coredump.socket.
Sep 10 00:46:09.619767 systemd[1]: Listening on systemd-initctl.socket.
Sep 10 00:46:09.619778 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 10 00:46:09.619788 kernel: kauditd_printk_skb: 48 callbacks suppressed
Sep 10 00:46:09.619799 kernel: audit: type=1400 audit(1757465169.520:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 10 00:46:09.619810 kernel: audit: type=1335 audit(1757465169.520:87): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 10 00:46:09.619821 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 10 00:46:09.619832 systemd[1]: Listening on systemd-journald.socket.
Sep 10 00:46:09.619842 systemd[1]: Listening on systemd-networkd.socket.
Sep 10 00:46:09.619853 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 10 00:46:09.619863 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 10 00:46:09.619874 systemd[1]: Listening on systemd-userdbd.socket.
Sep 10 00:46:09.619885 systemd[1]: Mounting dev-hugepages.mount...
Sep 10 00:46:09.619895 systemd[1]: Mounting dev-mqueue.mount...
Sep 10 00:46:09.619906 systemd[1]: Mounting media.mount...
Sep 10 00:46:09.619918 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 10 00:46:09.619929 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 10 00:46:09.619939 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 10 00:46:09.619950 systemd[1]: Mounting tmp.mount...
Sep 10 00:46:09.619961 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 10 00:46:09.619971 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 10 00:46:09.619982 systemd[1]: Starting kmod-static-nodes.service...
Sep 10 00:46:09.619992 systemd[1]: Starting modprobe@configfs.service...
Sep 10 00:46:09.620003 systemd[1]: Starting modprobe@dm_mod.service...
Sep 10 00:46:09.620015 systemd[1]: Starting modprobe@drm.service...
Sep 10 00:46:09.620026 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 10 00:46:09.620037 systemd[1]: Starting modprobe@fuse.service...
Sep 10 00:46:09.620047 systemd[1]: Starting modprobe@loop.service...
Sep 10 00:46:09.620069 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 00:46:09.620080 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 10 00:46:09.620091 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 10 00:46:09.620102 systemd[1]: Starting systemd-journald.service...
Sep 10 00:46:09.620114 kernel: loop: module loaded
Sep 10 00:46:09.620125 kernel: fuse: init (API version 7.34)
Sep 10 00:46:09.620135 systemd[1]: Starting systemd-modules-load.service...
Sep 10 00:46:09.620146 systemd[1]: Starting systemd-network-generator.service...
Sep 10 00:46:09.620157 systemd[1]: Starting systemd-remount-fs.service...
Sep 10 00:46:09.620168 kernel: audit: type=1305 audit(1757465169.618:88): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 10 00:46:09.620181 systemd-journald[1013]: Journal started
Sep 10 00:46:09.620223 systemd-journald[1013]: Runtime Journal (/run/log/journal/69e2fddfd45342228e0f7d258dece086) is 6.0M, max 48.5M, 42.5M free.
Sep 10 00:46:09.520000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 10 00:46:09.520000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 10 00:46:09.618000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 10 00:46:09.618000 audit[1013]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd8c8d23b0 a2=4000 a3=7ffd8c8d244c items=0 ppid=1 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:46:09.626825 kernel: audit: type=1300 audit(1757465169.618:88): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd8c8d23b0 a2=4000 a3=7ffd8c8d244c items=0 ppid=1 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:46:09.628759 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:46:09.618000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 10 00:46:09.631117 kernel: audit: type=1327 audit(1757465169.618:88): proctitle="/usr/lib/systemd/systemd-journald" Sep 10 00:46:09.635181 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:46:09.639079 systemd[1]: Started systemd-journald.service. 
Sep 10 00:46:09.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.640525 systemd[1]: Mounted dev-hugepages.mount. Sep 10 00:46:09.643098 kernel: audit: type=1130 audit(1757465169.639:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.644049 systemd[1]: Mounted dev-mqueue.mount. Sep 10 00:46:09.645090 systemd[1]: Mounted media.mount. Sep 10 00:46:09.646037 systemd[1]: Mounted sys-kernel-debug.mount. Sep 10 00:46:09.647104 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 10 00:46:09.648240 systemd[1]: Mounted tmp.mount. Sep 10 00:46:09.649484 systemd[1]: Finished flatcar-tmpfiles.service. Sep 10 00:46:09.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.650667 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:46:09.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.655234 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:46:09.655432 systemd[1]: Finished modprobe@configfs.service. Sep 10 00:46:09.658452 kernel: audit: type=1130 audit(1757465169.650:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:46:09.658491 kernel: audit: type=1130 audit(1757465169.654:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.659569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:46:09.659763 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:46:09.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.706606 kernel: audit: type=1130 audit(1757465169.659:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.706730 kernel: audit: type=1131 audit(1757465169.659:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:46:09.707972 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:46:09.708288 systemd[1]: Finished modprobe@drm.service. Sep 10 00:46:09.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.709297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:46:09.709433 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:46:09.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.710457 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:46:09.710593 systemd[1]: Finished modprobe@fuse.service. Sep 10 00:46:09.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.711553 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 10 00:46:09.711719 systemd[1]: Finished modprobe@loop.service. Sep 10 00:46:09.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.712761 systemd[1]: Finished systemd-modules-load.service. Sep 10 00:46:09.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.713860 systemd[1]: Finished systemd-network-generator.service. Sep 10 00:46:09.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.714979 systemd[1]: Finished systemd-remount-fs.service. Sep 10 00:46:09.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.716056 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:46:09.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.717412 systemd[1]: Reached target network-pre.target. Sep 10 00:46:09.719291 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Sep 10 00:46:09.720979 systemd[1]: Mounting sys-kernel-config.mount... Sep 10 00:46:09.721702 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:46:09.723348 systemd[1]: Starting systemd-hwdb-update.service... Sep 10 00:46:09.725280 systemd[1]: Starting systemd-journal-flush.service... Sep 10 00:46:09.726184 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:46:09.727162 systemd[1]: Starting systemd-random-seed.service... Sep 10 00:46:09.728083 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:46:09.729048 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:46:09.732580 systemd[1]: Starting systemd-sysusers.service... Sep 10 00:46:09.735283 systemd-journald[1013]: Time spent on flushing to /var/log/journal/69e2fddfd45342228e0f7d258dece086 is 12.909ms for 1060 entries. Sep 10 00:46:09.735283 systemd-journald[1013]: System Journal (/var/log/journal/69e2fddfd45342228e0f7d258dece086) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:46:10.000929 systemd-journald[1013]: Received client request to flush runtime journal. Sep 10 00:46:09.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:46:09.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:09.734814 systemd[1]: Starting systemd-udev-settle.service... Sep 10 00:46:09.739407 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 10 00:46:10.001487 udevadm[1054]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 10 00:46:09.740381 systemd[1]: Mounted sys-kernel-config.mount. Sep 10 00:46:09.829123 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:46:09.831528 systemd[1]: Finished systemd-sysusers.service. Sep 10 00:46:09.834313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 10 00:46:09.872487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 10 00:46:09.883700 systemd[1]: Finished systemd-random-seed.service. Sep 10 00:46:09.884593 systemd[1]: Reached target first-boot-complete.target. Sep 10 00:46:10.002039 systemd[1]: Finished systemd-journal-flush.service. Sep 10 00:46:10.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.513573 systemd[1]: Finished systemd-hwdb-update.service. Sep 10 00:46:10.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.516427 systemd[1]: Starting systemd-udevd.service... Sep 10 00:46:10.533906 systemd-udevd[1065]: Using default interface naming scheme 'v252'. Sep 10 00:46:10.546266 systemd[1]: Started systemd-udevd.service. 
Sep 10 00:46:10.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.549597 systemd[1]: Starting systemd-networkd.service... Sep 10 00:46:10.564169 systemd[1]: Starting systemd-userdbd.service... Sep 10 00:46:10.581405 systemd[1]: Found device dev-ttyS0.device. Sep 10 00:46:10.620707 systemd[1]: Started systemd-userdbd.service. Sep 10 00:46:10.622638 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:46:10.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.636108 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:46:10.641897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:46:10.653000 audit[1083]: AVC avc: denied { confidentiality } for pid=1083 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 10 00:46:10.675028 systemd-networkd[1076]: lo: Link UP Sep 10 00:46:10.675038 systemd-networkd[1076]: lo: Gained carrier Sep 10 00:46:10.675464 systemd-networkd[1076]: Enumeration completed Sep 10 00:46:10.675566 systemd[1]: Started systemd-networkd.service. Sep 10 00:46:10.675809 systemd-networkd[1076]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 10 00:46:10.676893 systemd-networkd[1076]: eth0: Link UP Sep 10 00:46:10.676900 systemd-networkd[1076]: eth0: Gained carrier Sep 10 00:46:10.653000 audit[1083]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dbe9a93c10 a1=338ec a2=7f6c89b42bc5 a3=5 items=110 ppid=1065 pid=1083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:46:10.653000 audit: CWD cwd="/" Sep 10 00:46:10.653000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=1 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=2 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=3 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=4 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=5 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=6 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=7 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=8 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=9 name=(null) inode=14613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=10 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=11 name=(null) inode=14614 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=12 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=13 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=14 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=15 name=(null) inode=14616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 
00:46:10.653000 audit: PATH item=16 name=(null) inode=14612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=17 name=(null) inode=14617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=18 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=19 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=20 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=21 name=(null) inode=14619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=22 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=23 name=(null) inode=14620 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=24 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=25 
name=(null) inode=14621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=26 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=27 name=(null) inode=14622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=28 name=(null) inode=14618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=29 name=(null) inode=14623 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=30 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=31 name=(null) inode=14624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=32 name=(null) inode=14624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=33 name=(null) inode=14625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=34 name=(null) inode=14624 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=35 name=(null) inode=14626 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=36 name=(null) inode=14624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=37 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=38 name=(null) inode=14624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=39 name=(null) inode=14628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=40 name=(null) inode=14624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=41 name=(null) inode=14629 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=42 name=(null) inode=14609 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=43 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=44 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=45 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=46 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=47 name=(null) inode=14632 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=48 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=49 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=50 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=51 name=(null) inode=14634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=52 name=(null) inode=14630 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=53 name=(null) inode=14635 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=55 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=56 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=57 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=58 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=59 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=60 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=61 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=62 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=63 name=(null) inode=14640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=64 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=65 name=(null) inode=14641 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=66 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=67 name=(null) inode=14642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=68 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=69 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=70 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 
00:46:10.653000 audit: PATH item=71 name=(null) inode=14644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=72 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=73 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=74 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=75 name=(null) inode=14646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=76 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=77 name=(null) inode=14647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=78 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=79 name=(null) inode=14648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=80 
name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=81 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=82 name=(null) inode=14645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=83 name=(null) inode=14650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=84 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=85 name=(null) inode=14651 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=86 name=(null) inode=14651 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=87 name=(null) inode=14652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=88 name=(null) inode=14651 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=89 name=(null) inode=14653 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=90 name=(null) inode=14651 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=91 name=(null) inode=14654 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=92 name=(null) inode=14651 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=93 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=94 name=(null) inode=14651 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=95 name=(null) inode=14656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=96 name=(null) inode=14636 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=97 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=98 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=99 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=100 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=101 name=(null) inode=14659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=102 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=103 name=(null) inode=14660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=104 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=105 name=(null) inode=14661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=106 name=(null) inode=14657 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=107 name=(null) inode=14662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PATH item=109 name=(null) inode=12245 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:46:10.653000 audit: PROCTITLE proctitle="(udev-worker)" Sep 10 00:46:10.690432 systemd-networkd[1076]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:46:10.692098 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:46:10.692390 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:46:10.692544 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:46:10.692655 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:46:10.696147 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:46:10.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.811542 kernel: kvm: Nested Virtualization enabled Sep 10 00:46:10.811699 kernel: SVM: kvm: Nested Paging enabled Sep 10 00:46:10.811717 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 10 00:46:10.812212 kernel: SVM: Virtual GIF supported Sep 10 00:46:10.829114 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:46:10.854565 systemd[1]: Finished systemd-udev-settle.service. 
Sep 10 00:46:10.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.857170 systemd[1]: Starting lvm2-activation-early.service... Sep 10 00:46:10.865132 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:46:10.902110 systemd[1]: Finished lvm2-activation-early.service. Sep 10 00:46:10.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.903254 systemd[1]: Reached target cryptsetup.target. Sep 10 00:46:10.905514 systemd[1]: Starting lvm2-activation.service... Sep 10 00:46:10.911903 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:46:10.940943 systemd[1]: Finished lvm2-activation.service. Sep 10 00:46:10.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.942104 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:46:10.942979 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:46:10.943002 systemd[1]: Reached target local-fs.target. Sep 10 00:46:10.943845 systemd[1]: Reached target machines.target. Sep 10 00:46:10.945899 systemd[1]: Starting ldconfig.service... Sep 10 00:46:10.947080 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 10 00:46:10.947115 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:46:10.948283 systemd[1]: Starting systemd-boot-update.service... Sep 10 00:46:10.950271 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 10 00:46:10.952417 systemd[1]: Starting systemd-machine-id-commit.service... Sep 10 00:46:10.955010 systemd[1]: Starting systemd-sysext.service... Sep 10 00:46:10.956524 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl) Sep 10 00:46:10.957626 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 10 00:46:10.960231 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 10 00:46:10.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:10.967409 systemd[1]: Unmounting usr-share-oem.mount... Sep 10 00:46:10.972798 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 10 00:46:10.973040 systemd[1]: Unmounted usr-share-oem.mount. Sep 10 00:46:10.984088 kernel: loop0: detected capacity change from 0 to 221472 Sep 10 00:46:10.997675 systemd-fsck[1116]: fsck.fat 4.2 (2021-01-31) Sep 10 00:46:10.997675 systemd-fsck[1116]: /dev/vda1: 790 files, 120765/258078 clusters Sep 10 00:46:10.999536 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 10 00:46:11.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.003160 systemd[1]: Mounting boot.mount... 
Sep 10 00:46:11.029854 systemd[1]: Mounted boot.mount. Sep 10 00:46:11.043269 systemd[1]: Finished systemd-boot-update.service. Sep 10 00:46:11.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.594988 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:46:11.595908 systemd[1]: Finished systemd-machine-id-commit.service. Sep 10 00:46:11.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.599097 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:46:11.614113 kernel: loop1: detected capacity change from 0 to 221472 Sep 10 00:46:11.623313 (sd-sysext)[1127]: Using extensions 'kubernetes'. Sep 10 00:46:11.624906 (sd-sysext)[1127]: Merged extensions into '/usr'. Sep 10 00:46:11.644102 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:46:11.645911 systemd[1]: Mounting usr-share-oem.mount... Sep 10 00:46:11.647025 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.648488 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:46:11.650885 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:46:11.654697 systemd[1]: Starting modprobe@loop.service... Sep 10 00:46:11.656013 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.656173 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 10 00:46:11.656298 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:46:11.661490 systemd[1]: Mounted usr-share-oem.mount. Sep 10 00:46:11.662778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:46:11.662945 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:46:11.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.664153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:46:11.664311 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:46:11.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.665534 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:46:11.665696 systemd[1]: Finished modprobe@loop.service. Sep 10 00:46:11.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:46:11.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.666817 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:46:11.666996 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:46:11.667220 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.668137 systemd[1]: Finished systemd-sysext.service. Sep 10 00:46:11.670630 systemd[1]: Starting ensure-sysext.service... Sep 10 00:46:11.672550 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 10 00:46:11.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.673909 systemd[1]: Finished ldconfig.service. Sep 10 00:46:11.677305 systemd[1]: Reloading. Sep 10 00:46:11.684582 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 10 00:46:11.685754 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:46:11.687404 systemd-tmpfiles[1142]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 10 00:46:11.730351 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-09-10T00:46:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:46:11.730383 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-09-10T00:46:11Z" level=info msg="torcx already run" Sep 10 00:46:11.815614 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:46:11.815630 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:46:11.837370 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:46:11.893846 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 10 00:46:11.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.899264 systemd[1]: Starting audit-rules.service... Sep 10 00:46:11.901869 systemd[1]: Starting clean-ca-certificates.service... Sep 10 00:46:11.904489 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 10 00:46:11.907831 systemd[1]: Starting systemd-resolved.service... Sep 10 00:46:11.910762 systemd[1]: Starting systemd-timesyncd.service... Sep 10 00:46:11.913266 systemd[1]: Starting systemd-update-utmp.service... Sep 10 00:46:11.915621 systemd[1]: Finished clean-ca-certificates.service. 
Sep 10 00:46:11.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.920491 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:46:11.921000 audit[1225]: SYSTEM_BOOT pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.925531 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.926271 systemd-networkd[1076]: eth0: Gained IPv6LL Sep 10 00:46:11.927436 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:46:11.931403 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:46:11.934867 systemd[1]: Starting modprobe@loop.service... Sep 10 00:46:11.935747 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.936748 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:46:11.937984 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:46:11.939721 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:46:11.939926 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:46:11.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:46:11.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.942270 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 10 00:46:11.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.943862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:46:11.944289 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:46:11.945932 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:46:11.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.946231 systemd[1]: Finished modprobe@loop.service. Sep 10 00:46:11.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.949033 systemd[1]: Finished systemd-update-utmp.service. 
Sep 10 00:46:11.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.954645 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.956100 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:46:11.959759 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:46:11.961678 systemd[1]: Starting modprobe@loop.service... Sep 10 00:46:11.963450 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.963669 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:46:11.965926 systemd[1]: Starting systemd-update-done.service... Sep 10 00:46:11.968424 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:46:11.969525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:46:11.969732 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:46:11.970948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:46:11.971101 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:46:11.974496 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:46:11.974668 systemd[1]: Finished modprobe@loop.service. Sep 10 00:46:11.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:46:11.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.976019 systemd[1]: Finished systemd-update-done.service. Sep 10 00:46:11.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:11.977658 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:46:11.977729 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:46:11.977810 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Sep 10 00:46:11.977871 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:46:11.980909 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:46:11.981670 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.982976 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:46:11.984814 systemd[1]: Starting modprobe@drm.service... Sep 10 00:46:11.987220 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:46:11.989271 systemd[1]: Starting modprobe@loop.service... Sep 10 00:46:11.990171 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:46:11.990307 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:46:11.992038 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 10 00:46:11.995139 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:46:11.995288 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:46:11.998256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:46:12.000504 augenrules[1254]: No rules Sep 10 00:46:11.998419 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:46:12.000320 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:46:12.000460 systemd[1]: Finished modprobe@drm.service. Sep 10 00:46:11.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 10 00:46:11.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:46:12.000000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 10 00:46:12.000000 audit[1254]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe592bdcc0 a2=420 a3=0 items=0 ppid=1213 pid=1254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:46:12.000000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 10 00:46:12.002079 systemd[1]: Finished audit-rules.service. Sep 10 00:46:12.003517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:46:12.003677 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:46:12.005329 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:46:12.005625 systemd[1]: Finished modprobe@loop.service. Sep 10 00:46:12.007224 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 10 00:46:12.009236 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:46:12.009328 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:46:12.010980 systemd[1]: Finished ensure-sysext.service. Sep 10 00:46:12.021354 systemd-resolved[1222]: Positive Trust Anchors: Sep 10 00:46:12.021369 systemd-resolved[1222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:46:12.021396 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 10 00:46:12.025378 systemd[1]: Started systemd-timesyncd.service. Sep 10 00:46:13.054872 systemd-timesyncd[1224]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:46:13.054936 systemd-timesyncd[1224]: Initial clock synchronization to Wed 2025-09-10 00:46:13.054787 UTC. Sep 10 00:46:13.055772 systemd[1]: Reached target time-set.target. Sep 10 00:46:13.058178 systemd-resolved[1222]: Defaulting to hostname 'linux'. Sep 10 00:46:13.059637 systemd[1]: Started systemd-resolved.service. Sep 10 00:46:13.060619 systemd[1]: Reached target network.target. Sep 10 00:46:13.061389 systemd[1]: Reached target network-online.target. Sep 10 00:46:13.062441 systemd[1]: Reached target nss-lookup.target. Sep 10 00:46:13.063335 systemd[1]: Reached target sysinit.target. Sep 10 00:46:13.064216 systemd[1]: Started motdgen.path. Sep 10 00:46:13.065032 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 10 00:46:13.066350 systemd[1]: Started logrotate.timer. Sep 10 00:46:13.067155 systemd[1]: Started mdadm.timer. Sep 10 00:46:13.067929 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 10 00:46:13.068787 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:46:13.068805 systemd[1]: Reached target paths.target. Sep 10 00:46:13.069551 systemd[1]: Reached target timers.target. 
Sep 10 00:46:13.070569 systemd[1]: Listening on dbus.socket. Sep 10 00:46:13.072668 systemd[1]: Starting docker.socket... Sep 10 00:46:13.075036 systemd[1]: Listening on sshd.socket. Sep 10 00:46:13.076008 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:46:13.076283 systemd[1]: Listening on docker.socket. Sep 10 00:46:13.077141 systemd[1]: Reached target sockets.target. Sep 10 00:46:13.077913 systemd[1]: Reached target basic.target. Sep 10 00:46:13.078757 systemd[1]: System is tainted: cgroupsv1 Sep 10 00:46:13.078798 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 10 00:46:13.078818 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 10 00:46:13.079729 systemd[1]: Starting containerd.service... Sep 10 00:46:13.081317 systemd[1]: Starting dbus.service... Sep 10 00:46:13.083225 systemd[1]: Starting enable-oem-cloudinit.service... Sep 10 00:46:13.085068 systemd[1]: Starting extend-filesystems.service... Sep 10 00:46:13.086029 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 10 00:46:13.087127 systemd[1]: Starting kubelet.service... Sep 10 00:46:13.089202 systemd[1]: Starting motdgen.service... Sep 10 00:46:13.091047 systemd[1]: Starting prepare-helm.service... Sep 10 00:46:13.093938 jq[1279]: false Sep 10 00:46:13.095002 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 10 00:46:13.099137 systemd[1]: Starting sshd-keygen.service... Sep 10 00:46:13.103348 systemd[1]: Starting systemd-logind.service... 
Sep 10 00:46:13.130223 extend-filesystems[1280]: Found loop1 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found sr0 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda1 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda2 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda3 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found usr Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda4 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda6 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda7 Sep 10 00:46:13.130223 extend-filesystems[1280]: Found vda9 Sep 10 00:46:13.130223 extend-filesystems[1280]: Checking size of /dev/vda9 Sep 10 00:46:13.176625 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:46:13.121086 dbus-daemon[1278]: [system] SELinux support is enabled Sep 10 00:46:13.104308 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:46:13.177354 extend-filesystems[1280]: Resized partition /dev/vda9 Sep 10 00:46:13.104462 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:46:13.178412 extend-filesystems[1313]: resize2fs 1.46.5 (30-Dec-2021) Sep 10 00:46:13.179571 jq[1306]: true Sep 10 00:46:13.106561 systemd[1]: Starting update-engine.service... Sep 10 00:46:13.133320 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 10 00:46:13.139050 systemd[1]: Started dbus.service. Sep 10 00:46:13.148005 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:46:13.148447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 10 00:46:13.149909 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 10 00:46:13.150256 systemd[1]: Finished motdgen.service. Sep 10 00:46:13.152339 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:46:13.152881 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 10 00:46:13.161244 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:46:13.161298 systemd[1]: Reached target system-config.target. Sep 10 00:46:13.165283 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:46:13.165302 systemd[1]: Reached target user-config.target. Sep 10 00:46:13.258366 env[1317]: time="2025-09-10T00:46:13.258314105Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 10 00:46:13.273664 env[1317]: time="2025-09-10T00:46:13.273635883Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:46:13.273878 env[1317]: time="2025-09-10T00:46:13.273857078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:46:13.275643 env[1317]: time="2025-09-10T00:46:13.275614824Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:46:13.275643 env[1317]: time="2025-09-10T00:46:13.275641393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:46:13.275858 env[1317]: time="2025-09-10T00:46:13.275834125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:46:13.275858 env[1317]: time="2025-09-10T00:46:13.275853531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:46:13.275956 env[1317]: time="2025-09-10T00:46:13.275865043Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 10 00:46:13.275956 env[1317]: time="2025-09-10T00:46:13.275873388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:46:13.275995 env[1317]: time="2025-09-10T00:46:13.275960732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:46:13.276215 env[1317]: time="2025-09-10T00:46:13.276194480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:46:13.276361 env[1317]: time="2025-09-10T00:46:13.276338480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:46:13.276361 env[1317]: time="2025-09-10T00:46:13.276356574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 10 00:46:13.276441 env[1317]: time="2025-09-10T00:46:13.276419893Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 10 00:46:13.276441 env[1317]: time="2025-09-10T00:46:13.276430914Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:46:13.704708 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:46:13.704810 jq[1315]: true Sep 10 00:46:13.704956 tar[1314]: linux-amd64/helm Sep 10 00:46:13.705199 extend-filesystems[1313]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:46:13.705199 extend-filesystems[1313]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:46:13.705199 extend-filesystems[1313]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:46:13.729612 update_engine[1298]: I0910 00:46:13.721132 1298 main.cc:92] Flatcar Update Engine starting Sep 10 00:46:13.729612 update_engine[1298]: I0910 00:46:13.729453 1298 update_check_scheduler.cc:74] Next update check in 4m12s Sep 10 00:46:13.707165 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:46:13.730277 extend-filesystems[1280]: Resized filesystem in /dev/vda9 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715444534Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715535334Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715581261Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715654889Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715675397Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715696366Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715709741Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715791635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715814758Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715845536Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715865213Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.715885711Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.716059848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:46:13.736425 env[1317]: time="2025-09-10T00:46:13.716199239Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:46:13.707485 systemd[1]: Finished extend-filesystems.service. 
Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.716811688Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.716870328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.716888652Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717028815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717051307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717073579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717090440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717108504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717131257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717148118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717165210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717187743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717463500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717485160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.736944 env[1317]: time="2025-09-10T00:46:13.717503575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:46:13.719799 systemd[1]: Started containerd.service. Sep 10 00:46:13.737292 env[1317]: time="2025-09-10T00:46:13.717520897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:46:13.737292 env[1317]: time="2025-09-10T00:46:13.717550553Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 10 00:46:13.737292 env[1317]: time="2025-09-10T00:46:13.717568777Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:46:13.737292 env[1317]: time="2025-09-10T00:46:13.717596639Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 10 00:46:13.737292 env[1317]: time="2025-09-10T00:46:13.717674535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 10 00:46:13.727120 systemd-logind[1293]: Watching system buttons on /dev/input/event1 (Power Button) Sep 10 00:46:13.727141 systemd-logind[1293]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.718067412Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true 
DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.718155547Z" level=info msg="Connect containerd service" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.718209989Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.718995262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.719425188Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.719489138Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.719566172Z" level=info msg="containerd successfully booted in 0.462162s" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.720366112Z" level=info msg="Start subscribing containerd event" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.724376281Z" level=info msg="Start recovering state" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.724462182Z" level=info msg="Start event monitor" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.724482090Z" level=info msg="Start snapshots syncer" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.724499462Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:46:13.737651 env[1317]: time="2025-09-10T00:46:13.724521954Z" level=info msg="Start streaming server" Sep 10 00:46:13.728298 systemd-logind[1293]: New seat seat0. Sep 10 00:46:13.729250 systemd[1]: Started update-engine.service. Sep 10 00:46:13.736743 systemd[1]: Started locksmithd.service. Sep 10 00:46:13.740877 systemd[1]: Started systemd-logind.service. Sep 10 00:46:13.750078 bash[1344]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:46:13.750511 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 10 00:46:13.854791 locksmithd[1348]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:46:14.521169 sshd_keygen[1299]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:46:14.540146 tar[1314]: linux-amd64/LICENSE Sep 10 00:46:14.540311 tar[1314]: linux-amd64/README.md Sep 10 00:46:14.542976 systemd[1]: Finished sshd-keygen.service. Sep 10 00:46:14.545288 systemd[1]: Starting issuegen.service... Sep 10 00:46:14.546873 systemd[1]: Finished prepare-helm.service. Sep 10 00:46:14.551038 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:46:14.551306 systemd[1]: Finished issuegen.service. 
Sep 10 00:46:14.553785 systemd[1]: Starting systemd-user-sessions.service... Sep 10 00:46:14.559027 systemd[1]: Finished systemd-user-sessions.service. Sep 10 00:46:14.561145 systemd[1]: Started getty@tty1.service. Sep 10 00:46:14.563187 systemd[1]: Started serial-getty@ttyS0.service. Sep 10 00:46:14.564243 systemd[1]: Reached target getty.target. Sep 10 00:46:15.349266 systemd[1]: Started kubelet.service. Sep 10 00:46:15.351276 systemd[1]: Reached target multi-user.target. Sep 10 00:46:15.353981 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 10 00:46:15.362806 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 10 00:46:15.363066 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 10 00:46:15.365796 systemd[1]: Startup finished in 7.951s (kernel) + 10.338s (userspace) = 18.289s. Sep 10 00:46:15.958655 kubelet[1379]: E0910 00:46:15.958601 1379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:46:15.960698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:46:15.960851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:46:22.073876 systemd[1]: Created slice system-sshd.slice. Sep 10 00:46:22.075296 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:40904.service. Sep 10 00:46:22.116969 sshd[1389]: Accepted publickey for core from 10.0.0.1 port 40904 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:46:22.119006 sshd[1389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:46:22.130682 systemd-logind[1293]: New session 1 of user core. Sep 10 00:46:22.131984 systemd[1]: Created slice user-500.slice. 
Sep 10 00:46:22.133269 systemd[1]: Starting user-runtime-dir@500.service... Sep 10 00:46:22.143385 systemd[1]: Finished user-runtime-dir@500.service. Sep 10 00:46:22.145085 systemd[1]: Starting user@500.service... Sep 10 00:46:22.148459 (systemd)[1394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:46:22.220312 systemd[1394]: Queued start job for default target default.target. Sep 10 00:46:22.220544 systemd[1394]: Reached target paths.target. Sep 10 00:46:22.220560 systemd[1394]: Reached target sockets.target. Sep 10 00:46:22.220572 systemd[1394]: Reached target timers.target. Sep 10 00:46:22.220583 systemd[1394]: Reached target basic.target. Sep 10 00:46:22.220621 systemd[1394]: Reached target default.target. Sep 10 00:46:22.220644 systemd[1394]: Startup finished in 66ms. Sep 10 00:46:22.220757 systemd[1]: Started user@500.service. Sep 10 00:46:22.221867 systemd[1]: Started session-1.scope. Sep 10 00:46:22.273480 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:40910.service. Sep 10 00:46:22.316869 sshd[1403]: Accepted publickey for core from 10.0.0.1 port 40910 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:46:22.318594 sshd[1403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:46:22.323125 systemd-logind[1293]: New session 2 of user core. Sep 10 00:46:22.324152 systemd[1]: Started session-2.scope. Sep 10 00:46:22.380557 sshd[1403]: pam_unix(sshd:session): session closed for user core Sep 10 00:46:22.383045 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:40914.service. Sep 10 00:46:22.383491 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:40910.service: Deactivated successfully. Sep 10 00:46:22.384357 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:46:22.384386 systemd-logind[1293]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:46:22.385207 systemd-logind[1293]: Removed session 2. 
Sep 10 00:46:22.422314 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 40914 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:46:22.423547 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:46:22.426816 systemd-logind[1293]: New session 3 of user core. Sep 10 00:46:22.427545 systemd[1]: Started session-3.scope. Sep 10 00:46:22.477141 sshd[1408]: pam_unix(sshd:session): session closed for user core Sep 10 00:46:22.480315 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:40924.service. Sep 10 00:46:22.480863 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:40914.service: Deactivated successfully. Sep 10 00:46:22.482435 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:46:22.482479 systemd-logind[1293]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:46:22.483626 systemd-logind[1293]: Removed session 3. Sep 10 00:46:22.516968 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 40924 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:46:22.518072 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:46:22.521432 systemd-logind[1293]: New session 4 of user core. Sep 10 00:46:22.522100 systemd[1]: Started session-4.scope. Sep 10 00:46:22.575640 sshd[1416]: pam_unix(sshd:session): session closed for user core Sep 10 00:46:22.577790 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:40940.service. Sep 10 00:46:22.578541 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:40924.service: Deactivated successfully. Sep 10 00:46:22.579327 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:46:22.579446 systemd-logind[1293]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:46:22.580152 systemd-logind[1293]: Removed session 4. 
Sep 10 00:46:22.615907 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 40940 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:46:22.617050 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:46:22.620424 systemd-logind[1293]: New session 5 of user core. Sep 10 00:46:22.621183 systemd[1]: Started session-5.scope. Sep 10 00:46:22.675683 sudo[1428]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:46:22.675906 sudo[1428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 10 00:46:22.701031 systemd[1]: Starting docker.service... Sep 10 00:46:22.747232 env[1440]: time="2025-09-10T00:46:22.747141884Z" level=info msg="Starting up" Sep 10 00:46:22.748777 env[1440]: time="2025-09-10T00:46:22.748727828Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 10 00:46:22.748777 env[1440]: time="2025-09-10T00:46:22.748754919Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 10 00:46:22.748777 env[1440]: time="2025-09-10T00:46:22.748778112Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 10 00:46:22.748777 env[1440]: time="2025-09-10T00:46:22.748791367Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 10 00:46:22.751276 env[1440]: time="2025-09-10T00:46:22.751238385Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 10 00:46:22.751276 env[1440]: time="2025-09-10T00:46:22.751257361Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 10 00:46:22.751276 env[1440]: time="2025-09-10T00:46:22.751268762Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 10 00:46:22.751276 env[1440]: time="2025-09-10T00:46:22.751276867Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 10 00:46:23.440386 env[1440]: time="2025-09-10T00:46:23.440329090Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 10 00:46:23.440386 env[1440]: time="2025-09-10T00:46:23.440362393Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 10 00:46:23.440933 env[1440]: time="2025-09-10T00:46:23.440904900Z" level=info msg="Loading containers: start." Sep 10 00:46:23.550926 kernel: Initializing XFRM netlink socket Sep 10 00:46:23.577463 env[1440]: time="2025-09-10T00:46:23.577418944Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 10 00:46:23.624788 systemd-networkd[1076]: docker0: Link UP Sep 10 00:46:23.641939 env[1440]: time="2025-09-10T00:46:23.641884016Z" level=info msg="Loading containers: done." Sep 10 00:46:23.653748 env[1440]: time="2025-09-10T00:46:23.653700832Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:46:23.653930 env[1440]: time="2025-09-10T00:46:23.653869609Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 10 00:46:23.653994 env[1440]: time="2025-09-10T00:46:23.653968053Z" level=info msg="Daemon has completed initialization" Sep 10 00:46:23.670061 systemd[1]: Started docker.service. Sep 10 00:46:23.673595 env[1440]: time="2025-09-10T00:46:23.673548236Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:46:24.326380 env[1317]: time="2025-09-10T00:46:24.326317538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 10 00:46:24.990134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237180120.mount: Deactivated successfully. 
Sep 10 00:46:26.212228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:46:26.212789 systemd[1]: Stopped kubelet.service. Sep 10 00:46:26.215264 systemd[1]: Starting kubelet.service... Sep 10 00:46:26.388412 systemd[1]: Started kubelet.service. Sep 10 00:46:26.425855 kubelet[1580]: E0910 00:46:26.425788 1580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:46:26.428746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:46:26.428915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:46:27.061068 env[1317]: time="2025-09-10T00:46:27.060981342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:27.062982 env[1317]: time="2025-09-10T00:46:27.062935015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:27.064934 env[1317]: time="2025-09-10T00:46:27.064870504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:27.066712 env[1317]: time="2025-09-10T00:46:27.066655511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:27.067526 env[1317]: 
time="2025-09-10T00:46:27.067485558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 10 00:46:27.068296 env[1317]: time="2025-09-10T00:46:27.068252987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 10 00:46:30.858108 env[1317]: time="2025-09-10T00:46:30.857993068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:30.859935 env[1317]: time="2025-09-10T00:46:30.859901827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:30.861630 env[1317]: time="2025-09-10T00:46:30.861597466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:30.863267 env[1317]: time="2025-09-10T00:46:30.863244575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:30.863830 env[1317]: time="2025-09-10T00:46:30.863772224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 10 00:46:30.864566 env[1317]: time="2025-09-10T00:46:30.864546816Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 10 00:46:33.144227 env[1317]: time="2025-09-10T00:46:33.144144932Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:33.146567 env[1317]: time="2025-09-10T00:46:33.146497724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:33.148517 env[1317]: time="2025-09-10T00:46:33.148492394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:33.150281 env[1317]: time="2025-09-10T00:46:33.150195687Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:33.150811 env[1317]: time="2025-09-10T00:46:33.150769894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 10 00:46:33.151328 env[1317]: time="2025-09-10T00:46:33.151302122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 10 00:46:34.307297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775266147.mount: Deactivated successfully. 
Sep 10 00:46:35.599024 env[1317]: time="2025-09-10T00:46:35.598903284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:35.603171 env[1317]: time="2025-09-10T00:46:35.603076398Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:35.605331 env[1317]: time="2025-09-10T00:46:35.605275001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:35.606868 env[1317]: time="2025-09-10T00:46:35.606811131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:35.607213 env[1317]: time="2025-09-10T00:46:35.607171647Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 10 00:46:35.607618 env[1317]: time="2025-09-10T00:46:35.607580784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:46:36.353445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799865227.mount: Deactivated successfully. Sep 10 00:46:36.679866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:46:36.680089 systemd[1]: Stopped kubelet.service. Sep 10 00:46:36.682319 systemd[1]: Starting kubelet.service... Sep 10 00:46:37.037710 systemd[1]: Started kubelet.service. 
Sep 10 00:46:37.091188 kubelet[1597]: E0910 00:46:37.091100 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:46:37.093691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:46:37.093858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:46:38.072813 env[1317]: time="2025-09-10T00:46:38.072701424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.075195 env[1317]: time="2025-09-10T00:46:38.075151739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.077264 env[1317]: time="2025-09-10T00:46:38.077214026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.079372 env[1317]: time="2025-09-10T00:46:38.079327428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.080246 env[1317]: time="2025-09-10T00:46:38.080208721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:46:38.080947 env[1317]: time="2025-09-10T00:46:38.080907421Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.10\"" Sep 10 00:46:38.671863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764722840.mount: Deactivated successfully. Sep 10 00:46:38.676499 env[1317]: time="2025-09-10T00:46:38.676443070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.678974 env[1317]: time="2025-09-10T00:46:38.678934932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.680528 env[1317]: time="2025-09-10T00:46:38.680497602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.681963 env[1317]: time="2025-09-10T00:46:38.681939536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:38.682397 env[1317]: time="2025-09-10T00:46:38.682373309Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:46:38.682834 env[1317]: time="2025-09-10T00:46:38.682813985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 10 00:46:39.466759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676229330.mount: Deactivated successfully. 
Sep 10 00:46:43.065376 env[1317]: time="2025-09-10T00:46:43.065272494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:43.067691 env[1317]: time="2025-09-10T00:46:43.067633781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:43.069523 env[1317]: time="2025-09-10T00:46:43.069479362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:43.071475 env[1317]: time="2025-09-10T00:46:43.071437443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:46:43.072261 env[1317]: time="2025-09-10T00:46:43.072227815Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 10 00:46:44.895285 systemd[1]: Stopped kubelet.service. Sep 10 00:46:44.897532 systemd[1]: Starting kubelet.service... Sep 10 00:46:44.918881 systemd[1]: Reloading. 
Sep 10 00:46:44.988448 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2025-09-10T00:46:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:46:44.988873 /usr/lib/systemd/system-generators/torcx-generator[1656]: time="2025-09-10T00:46:44Z" level=info msg="torcx already run" Sep 10 00:46:45.279324 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:46:45.279344 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:46:45.299205 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:46:45.373555 systemd[1]: Started kubelet.service. Sep 10 00:46:45.375400 systemd[1]: Stopping kubelet.service... Sep 10 00:46:45.375818 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:46:45.376085 systemd[1]: Stopped kubelet.service. Sep 10 00:46:45.377755 systemd[1]: Starting kubelet.service... Sep 10 00:46:45.469274 systemd[1]: Started kubelet.service. Sep 10 00:46:45.510675 kubelet[1715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:46:45.510675 kubelet[1715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 10 00:46:45.510675 kubelet[1715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:46:45.511153 kubelet[1715]: I0910 00:46:45.510752 1715 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:46:45.877961 kubelet[1715]: I0910 00:46:45.877915 1715 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:46:45.877961 kubelet[1715]: I0910 00:46:45.877949 1715 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:46:45.878198 kubelet[1715]: I0910 00:46:45.878180 1715 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:46:45.900697 kubelet[1715]: E0910 00:46:45.900659 1715 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:45.902968 kubelet[1715]: I0910 00:46:45.902942 1715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:46:45.909140 kubelet[1715]: E0910 00:46:45.909116 1715 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:46:45.909140 kubelet[1715]: I0910 00:46:45.909138 1715 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
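[editor's note] The repeated "dial tcp 10.0.0.93:6443: connect: connection refused" entries mean the kubelet cannot open a TCP connection to its configured API server; during control-plane bootstrap this is normal until the static kube-apiserver pod starts listening on that port. A hedged equivalent of that connectivity probe (the address is taken from the log; the helper name is ours):

```python
# Sketch: the reachability test the kubelet's client-go reflectors keep
# failing -- can a TCP connection be opened to the API server endpoint?
import socket

def apiserver_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 10.0.0.93:6443 is the endpoint from the log above.
print(apiserver_reachable("10.0.0.93", 6443))
```

Every subsequent "Unhandled Error" from reflector.go, the lease-controller retries, and the node-registration failures in this log are downstream of this one unreachable endpoint.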
Sep 10 00:46:45.914068 kubelet[1715]: I0910 00:46:45.914046 1715 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 00:46:45.914759 kubelet[1715]: I0910 00:46:45.914737 1715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:46:45.914879 kubelet[1715]: I0910 00:46:45.914848 1715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:46:45.915080 kubelet[1715]: I0910 00:46:45.914876 1715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerR
eservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:46:45.915217 kubelet[1715]: I0910 00:46:45.915092 1715 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:46:45.915217 kubelet[1715]: I0910 00:46:45.915101 1715 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:46:45.915267 kubelet[1715]: I0910 00:46:45.915216 1715 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:46:45.919531 kubelet[1715]: I0910 00:46:45.919511 1715 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:46:45.919597 kubelet[1715]: I0910 00:46:45.919538 1715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:46:45.919597 kubelet[1715]: I0910 00:46:45.919586 1715 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:46:45.919656 kubelet[1715]: I0910 00:46:45.919607 1715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:46:45.930440 kubelet[1715]: W0910 00:46:45.930364 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:45.930440 kubelet[1715]: E0910 00:46:45.930447 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:45.931066 kubelet[1715]: W0910 00:46:45.930951 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:45.931066 kubelet[1715]: E0910 00:46:45.930988 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:45.931303 kubelet[1715]: I0910 00:46:45.931275 1715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 10 00:46:45.931857 kubelet[1715]: I0910 00:46:45.931830 1715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:46:45.931956 kubelet[1715]: W0910 00:46:45.931945 1715 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 00:46:45.936370 kubelet[1715]: I0910 00:46:45.936346 1715 server.go:1274] "Started kubelet" Sep 10 00:46:45.936795 kubelet[1715]: I0910 00:46:45.936764 1715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:46:45.937424 kubelet[1715]: I0910 00:46:45.937395 1715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:46:45.937930 kubelet[1715]: I0910 00:46:45.937910 1715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:46:45.938537 kubelet[1715]: I0910 00:46:45.938518 1715 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:46:45.941710 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 10 00:46:45.941811 kubelet[1715]: I0910 00:46:45.941794 1715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:46:45.946631 kubelet[1715]: I0910 00:46:45.946595 1715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:46:45.947634 kubelet[1715]: I0910 00:46:45.947615 1715 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:46:45.947686 kubelet[1715]: E0910 00:46:45.943757 1715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c54b18f16828 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:46:45.936318504 +0000 UTC m=+0.462450064,LastTimestamp:2025-09-10 00:46:45.936318504 +0000 UTC m=+0.462450064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:46:45.947759 kubelet[1715]: I0910 00:46:45.947739 1715 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:46:45.948050 kubelet[1715]: I0910 00:46:45.947813 1715 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:46:45.948050 kubelet[1715]: E0910 00:46:45.947958 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:45.948191 kubelet[1715]: W0910 00:46:45.948154 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:45.948240 kubelet[1715]: E0910 00:46:45.948203 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:45.948268 kubelet[1715]: I0910 00:46:45.948238 1715 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:46:45.948416 kubelet[1715]: I0910 00:46:45.948371 1715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:46:45.948687 kubelet[1715]: E0910 00:46:45.948659 1715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" Sep 10 00:46:45.950037 kubelet[1715]: I0910 00:46:45.950018 1715 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:46:45.950222 kubelet[1715]: E0910 00:46:45.950191 1715 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:46:45.957508 kubelet[1715]: I0910 00:46:45.957477 1715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:46:45.958379 kubelet[1715]: I0910 00:46:45.958340 1715 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:46:45.958379 kubelet[1715]: I0910 00:46:45.958375 1715 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:46:45.958531 kubelet[1715]: I0910 00:46:45.958405 1715 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:46:45.958531 kubelet[1715]: E0910 00:46:45.958473 1715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:46:45.964620 kubelet[1715]: W0910 00:46:45.964521 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:45.964785 kubelet[1715]: E0910 00:46:45.964741 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:45.965783 kubelet[1715]: I0910 00:46:45.965762 1715 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:46:45.965783 kubelet[1715]: I0910 00:46:45.965780 1715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:46:45.965880 kubelet[1715]: I0910 00:46:45.965799 1715 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:46:46.049011 kubelet[1715]: E0910 00:46:46.048923 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.059074 kubelet[1715]: E0910 00:46:46.059017 1715 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:46:46.149472 kubelet[1715]: E0910 00:46:46.149342 1715 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.149776 kubelet[1715]: E0910 00:46:46.149720 1715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" Sep 10 00:46:46.250107 kubelet[1715]: E0910 00:46:46.250063 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.259171 kubelet[1715]: E0910 00:46:46.259136 1715 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:46:46.350721 kubelet[1715]: E0910 00:46:46.350668 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.451791 kubelet[1715]: E0910 00:46:46.451642 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.550492 kubelet[1715]: E0910 00:46:46.550435 1715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" Sep 10 00:46:46.552434 kubelet[1715]: E0910 00:46:46.552418 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.652994 kubelet[1715]: E0910 00:46:46.652933 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.660169 kubelet[1715]: E0910 00:46:46.660122 1715 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:46:46.753782 kubelet[1715]: E0910 00:46:46.753646 
1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.854052 kubelet[1715]: E0910 00:46:46.854031 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.854253 kubelet[1715]: I0910 00:46:46.854187 1715 policy_none.go:49] "None policy: Start" Sep 10 00:46:46.855274 kubelet[1715]: I0910 00:46:46.855251 1715 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:46:46.855331 kubelet[1715]: I0910 00:46:46.855285 1715 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:46:46.954806 kubelet[1715]: E0910 00:46:46.954750 1715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:46:46.961797 kubelet[1715]: I0910 00:46:46.961755 1715 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:46:46.961996 kubelet[1715]: I0910 00:46:46.961970 1715 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:46:46.962045 kubelet[1715]: I0910 00:46:46.961999 1715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:46:46.962493 kubelet[1715]: I0910 00:46:46.962464 1715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:46:46.963311 kubelet[1715]: E0910 00:46:46.963295 1715 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:46:47.064211 kubelet[1715]: I0910 00:46:47.064041 1715 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:46:47.064657 kubelet[1715]: E0910 00:46:47.064626 1715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection 
refused" node="localhost" Sep 10 00:46:47.128369 kubelet[1715]: W0910 00:46:47.128311 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:47.128369 kubelet[1715]: E0910 00:46:47.128384 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:47.236917 kubelet[1715]: W0910 00:46:47.236802 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:47.236917 kubelet[1715]: E0910 00:46:47.236867 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:47.266921 kubelet[1715]: I0910 00:46:47.266851 1715 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:46:47.267254 kubelet[1715]: E0910 00:46:47.267219 1715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 10 00:46:47.352047 kubelet[1715]: E0910 00:46:47.351949 1715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="1.6s" Sep 10 00:46:47.484466 kubelet[1715]: W0910 00:46:47.484391 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:47.484466 kubelet[1715]: E0910 00:46:47.484454 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:47.521235 kubelet[1715]: W0910 00:46:47.521128 1715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Sep 10 00:46:47.521235 kubelet[1715]: E0910 00:46:47.521186 1715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:47.558785 kubelet[1715]: I0910 00:46:47.558707 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13de773fd8519b2cda02f00ffe9cf5fa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13de773fd8519b2cda02f00ffe9cf5fa\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:46:47.558785 
kubelet[1715]: I0910 00:46:47.558736 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13de773fd8519b2cda02f00ffe9cf5fa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"13de773fd8519b2cda02f00ffe9cf5fa\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:46:47.558785 kubelet[1715]: I0910 00:46:47.558752 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:46:47.558785 kubelet[1715]: I0910 00:46:47.558767 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:46:47.559324 kubelet[1715]: I0910 00:46:47.558812 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:46:47.559324 kubelet[1715]: I0910 00:46:47.558835 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13de773fd8519b2cda02f00ffe9cf5fa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13de773fd8519b2cda02f00ffe9cf5fa\") " pod="kube-system/kube-apiserver-localhost" Sep 
10 00:46:47.559324 kubelet[1715]: I0910 00:46:47.558848 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:46:47.559324 kubelet[1715]: I0910 00:46:47.558865 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:46:47.559324 kubelet[1715]: I0910 00:46:47.558879 1715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:46:47.669470 kubelet[1715]: I0910 00:46:47.669332 1715 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:46:47.669779 kubelet[1715]: E0910 00:46:47.669746 1715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 10 00:46:47.766415 kubelet[1715]: E0910 00:46:47.766371 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:47.766604 kubelet[1715]: E0910 00:46:47.766577 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:47.767305 env[1317]: time="2025-09-10T00:46:47.767257858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13de773fd8519b2cda02f00ffe9cf5fa,Namespace:kube-system,Attempt:0,}" Sep 10 00:46:47.767581 env[1317]: time="2025-09-10T00:46:47.767257928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:46:47.768369 kubelet[1715]: E0910 00:46:47.768340 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:47.768812 env[1317]: time="2025-09-10T00:46:47.768772973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:46:47.949541 kubelet[1715]: E0910 00:46:47.949419 1715 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:46:48.471633 kubelet[1715]: I0910 00:46:48.471587 1715 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:46:48.471998 kubelet[1715]: E0910 00:46:48.471961 1715 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 10 00:46:48.694084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128260401.mount: Deactivated successfully. 
Sep 10 00:46:48.700956 env[1317]: time="2025-09-10T00:46:48.700905129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.701907 env[1317]: time="2025-09-10T00:46:48.701857138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.703921 env[1317]: time="2025-09-10T00:46:48.703876658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.706740 env[1317]: time="2025-09-10T00:46:48.706698449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.707669 env[1317]: time="2025-09-10T00:46:48.707640338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.709364 env[1317]: time="2025-09-10T00:46:48.709337469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.710781 env[1317]: time="2025-09-10T00:46:48.710739913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.713465 env[1317]: time="2025-09-10T00:46:48.713436924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.714806 env[1317]: time="2025-09-10T00:46:48.714769825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.716117 env[1317]: time="2025-09-10T00:46:48.716083689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.717607 env[1317]: time="2025-09-10T00:46:48.717575875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.719603 env[1317]: time="2025-09-10T00:46:48.719568423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:46:48.744925 env[1317]: time="2025-09-10T00:46:48.743312568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:46:48.744925 env[1317]: time="2025-09-10T00:46:48.743378305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:46:48.744925 env[1317]: time="2025-09-10T00:46:48.743393063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:46:48.744925 env[1317]: time="2025-09-10T00:46:48.743971515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1f0a4e1124789d59a3b57b393e8aa0015a8bdbc20b746e189e9adce5417c13a pid=1756 runtime=io.containerd.runc.v2
Sep 10 00:46:48.751636 env[1317]: time="2025-09-10T00:46:48.751554542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:46:48.751636 env[1317]: time="2025-09-10T00:46:48.751594680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:46:48.751636 env[1317]: time="2025-09-10T00:46:48.751605019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:46:48.751848 env[1317]: time="2025-09-10T00:46:48.751723998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bf29a329431be1bf9e6f5177eda098a4c4a0264eb585bcb53768c9ac4187218 pid=1782 runtime=io.containerd.runc.v2
Sep 10 00:46:48.751848 env[1317]: time="2025-09-10T00:46:48.751804422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:46:48.751930 env[1317]: time="2025-09-10T00:46:48.751844850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:46:48.751930 env[1317]: time="2025-09-10T00:46:48.751854939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:46:48.752014 env[1317]: time="2025-09-10T00:46:48.751975220Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ea77f5484cc94ba4738e9973fea70f2f0b0294d67a146bba8791d915da4df31 pid=1786 runtime=io.containerd.runc.v2
Sep 10 00:46:48.802931 env[1317]: time="2025-09-10T00:46:48.802371877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bf29a329431be1bf9e6f5177eda098a4c4a0264eb585bcb53768c9ac4187218\""
Sep 10 00:46:48.803527 kubelet[1715]: E0910 00:46:48.803496 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:48.804093 env[1317]: time="2025-09-10T00:46:48.804051233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:13de773fd8519b2cda02f00ffe9cf5fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ea77f5484cc94ba4738e9973fea70f2f0b0294d67a146bba8791d915da4df31\""
Sep 10 00:46:48.804247 env[1317]: time="2025-09-10T00:46:48.804208566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1f0a4e1124789d59a3b57b393e8aa0015a8bdbc20b746e189e9adce5417c13a\""
Sep 10 00:46:48.804766 kubelet[1715]: E0910 00:46:48.804737 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:48.805602 kubelet[1715]: E0910 00:46:48.805297 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:48.806922 env[1317]: time="2025-09-10T00:46:48.806867905Z" level=info msg="CreateContainer within sandbox \"1bf29a329431be1bf9e6f5177eda098a4c4a0264eb585bcb53768c9ac4187218\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 10 00:46:48.807448 env[1317]: time="2025-09-10T00:46:48.807423392Z" level=info msg="CreateContainer within sandbox \"2ea77f5484cc94ba4738e9973fea70f2f0b0294d67a146bba8791d915da4df31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 10 00:46:48.807516 env[1317]: time="2025-09-10T00:46:48.807482135Z" level=info msg="CreateContainer within sandbox \"c1f0a4e1124789d59a3b57b393e8aa0015a8bdbc20b746e189e9adce5417c13a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 10 00:46:48.834043 env[1317]: time="2025-09-10T00:46:48.834005439Z" level=info msg="CreateContainer within sandbox \"2ea77f5484cc94ba4738e9973fea70f2f0b0294d67a146bba8791d915da4df31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8411acb2efd3394f1dbc1093baf4e3535c83a16c5542444f22f72f1e2811e221\""
Sep 10 00:46:48.834719 env[1317]: time="2025-09-10T00:46:48.834694032Z" level=info msg="StartContainer for \"8411acb2efd3394f1dbc1093baf4e3535c83a16c5542444f22f72f1e2811e221\""
Sep 10 00:46:48.837150 env[1317]: time="2025-09-10T00:46:48.837125552Z" level=info msg="CreateContainer within sandbox \"1bf29a329431be1bf9e6f5177eda098a4c4a0264eb585bcb53768c9ac4187218\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"056fa00fe3873b122510cc0de79e0365ddb7300ec07c07f34b5850c9e55bf3fc\""
Sep 10 00:46:48.837594 env[1317]: time="2025-09-10T00:46:48.837576369Z" level=info msg="StartContainer for \"056fa00fe3873b122510cc0de79e0365ddb7300ec07c07f34b5850c9e55bf3fc\""
Sep 10 00:46:48.838625 env[1317]: time="2025-09-10T00:46:48.838584967Z" level=info msg="CreateContainer within sandbox \"c1f0a4e1124789d59a3b57b393e8aa0015a8bdbc20b746e189e9adce5417c13a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3288cfb8de9948c4bdf1fc36e8bb331cdf3d646fd789d5e1caa4e943ae2c400\""
Sep 10 00:46:48.839100 env[1317]: time="2025-09-10T00:46:48.839040201Z" level=info msg="StartContainer for \"c3288cfb8de9948c4bdf1fc36e8bb331cdf3d646fd789d5e1caa4e943ae2c400\""
Sep 10 00:46:48.894125 env[1317]: time="2025-09-10T00:46:48.894074167Z" level=info msg="StartContainer for \"8411acb2efd3394f1dbc1093baf4e3535c83a16c5542444f22f72f1e2811e221\" returns successfully"
Sep 10 00:46:48.899982 env[1317]: time="2025-09-10T00:46:48.899933894Z" level=info msg="StartContainer for \"056fa00fe3873b122510cc0de79e0365ddb7300ec07c07f34b5850c9e55bf3fc\" returns successfully"
Sep 10 00:46:48.920218 env[1317]: time="2025-09-10T00:46:48.920164018Z" level=info msg="StartContainer for \"c3288cfb8de9948c4bdf1fc36e8bb331cdf3d646fd789d5e1caa4e943ae2c400\" returns successfully"
Sep 10 00:46:48.972480 kubelet[1715]: E0910 00:46:48.972447 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:48.974020 kubelet[1715]: E0910 00:46:48.974001 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:48.975401 kubelet[1715]: E0910 00:46:48.975381 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:49.977073 kubelet[1715]: E0910 00:46:49.977042 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:50.073478 kubelet[1715]: I0910 00:46:50.073429 1715 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:46:50.301552 kubelet[1715]: E0910 00:46:50.301412 1715 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:50.833510 kubelet[1715]: I0910 00:46:50.833463 1715 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 10 00:46:50.926368 kubelet[1715]: I0910 00:46:50.926324 1715 apiserver.go:52] "Watching apiserver"
Sep 10 00:46:50.948205 kubelet[1715]: I0910 00:46:50.948112 1715 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 10 00:46:51.108408 kubelet[1715]: E0910 00:46:51.108241 1715 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Sep 10 00:46:52.923558 systemd[1]: Reloading.
Sep 10 00:46:53.009287 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-09-10T00:46:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 10 00:46:53.009315 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-09-10T00:46:53Z" level=info msg="torcx already run"
Sep 10 00:46:53.083962 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 10 00:46:53.083983 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 10 00:46:53.107135 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:46:53.188408 systemd[1]: Stopping kubelet.service...
Sep 10 00:46:53.210701 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 00:46:53.211154 systemd[1]: Stopped kubelet.service.
Sep 10 00:46:53.213596 systemd[1]: Starting kubelet.service...
Sep 10 00:46:53.313608 systemd[1]: Started kubelet.service.
Sep 10 00:46:53.468284 kubelet[2069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:46:53.468284 kubelet[2069]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:46:53.468284 kubelet[2069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:46:53.468284 kubelet[2069]: I0910 00:46:53.468245 2069 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:46:53.478011 kubelet[2069]: I0910 00:46:53.477961 2069 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 00:46:53.478011 kubelet[2069]: I0910 00:46:53.477996 2069 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:46:53.478268 kubelet[2069]: I0910 00:46:53.478244 2069 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 00:46:53.479454 kubelet[2069]: I0910 00:46:53.479423 2069 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 10 00:46:53.482808 kubelet[2069]: I0910 00:46:53.482764 2069 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:46:53.486829 kubelet[2069]: E0910 00:46:53.486794 2069 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:46:53.486829 kubelet[2069]: I0910 00:46:53.486825 2069 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:46:53.490556 kubelet[2069]: I0910 00:46:53.490514 2069 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:46:53.490878 kubelet[2069]: I0910 00:46:53.490851 2069 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 00:46:53.491003 kubelet[2069]: I0910 00:46:53.490964 2069 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:46:53.491196 kubelet[2069]: I0910 00:46:53.491002 2069 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 10 00:46:53.491196 kubelet[2069]: I0910 00:46:53.491196 2069 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:46:53.491311 kubelet[2069]: I0910 00:46:53.491205 2069 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 00:46:53.491311 kubelet[2069]: I0910 00:46:53.491241 2069 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:46:53.491363 kubelet[2069]: I0910 00:46:53.491323 2069 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 00:46:53.491363 kubelet[2069]: I0910 00:46:53.491335 2069 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:46:53.491363 kubelet[2069]: I0910 00:46:53.491360 2069 kubelet.go:314] "Adding apiserver pod source"
Sep 10 00:46:53.491449 kubelet[2069]: I0910 00:46:53.491373 2069 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:46:53.493026 kubelet[2069]: I0910 00:46:53.493008 2069 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 10 00:46:53.493705 kubelet[2069]: I0910 00:46:53.493679 2069 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:46:53.494541 kubelet[2069]: I0910 00:46:53.494526 2069 server.go:1274] "Started kubelet"
Sep 10 00:46:53.496759 kubelet[2069]: I0910 00:46:53.496706 2069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:46:53.498128 kubelet[2069]: I0910 00:46:53.498099 2069 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:46:53.499828 kubelet[2069]: I0910 00:46:53.499792 2069 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 00:46:53.500071 kubelet[2069]: E0910 00:46:53.500035 2069 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:46:53.501064 kubelet[2069]: I0910 00:46:53.501036 2069 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 00:46:53.501253 kubelet[2069]: I0910 00:46:53.501231 2069 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:46:53.501348 kubelet[2069]: I0910 00:46:53.501328 2069 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:46:53.501575 kubelet[2069]: I0910 00:46:53.501555 2069 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:46:53.503882 kubelet[2069]: I0910 00:46:53.503682 2069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:46:53.504013 kubelet[2069]: I0910 00:46:53.503977 2069 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:46:53.504043 kubelet[2069]: I0910 00:46:53.504024 2069 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:46:53.505714 kubelet[2069]: I0910 00:46:53.505674 2069 server.go:449] "Adding debug handlers to kubelet server"
Sep 10 00:46:53.509250 kubelet[2069]: E0910 00:46:53.509204 2069 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:46:53.509953 kubelet[2069]: I0910 00:46:53.509924 2069 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:46:53.516848 kubelet[2069]: I0910 00:46:53.516782 2069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:46:53.517652 kubelet[2069]: I0910 00:46:53.517625 2069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:46:53.517652 kubelet[2069]: I0910 00:46:53.517648 2069 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 10 00:46:53.517749 kubelet[2069]: I0910 00:46:53.517668 2069 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 10 00:46:53.517749 kubelet[2069]: E0910 00:46:53.517719 2069 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:46:53.562340 kubelet[2069]: I0910 00:46:53.562296 2069 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 10 00:46:53.562340 kubelet[2069]: I0910 00:46:53.562328 2069 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 10 00:46:53.562527 kubelet[2069]: I0910 00:46:53.562354 2069 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:46:53.562592 kubelet[2069]: I0910 00:46:53.562569 2069 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 00:46:53.562617 kubelet[2069]: I0910 00:46:53.562592 2069 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 00:46:53.562640 kubelet[2069]: I0910 00:46:53.562619 2069 policy_none.go:49] "None policy: Start"
Sep 10 00:46:53.563320 kubelet[2069]: I0910 00:46:53.563298 2069 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 10 00:46:53.563457 kubelet[2069]: I0910 00:46:53.563408 2069 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:46:53.563669 kubelet[2069]: I0910 00:46:53.563589 2069 state_mem.go:75] "Updated machine memory state"
Sep 10 00:46:53.564720 kubelet[2069]: I0910 00:46:53.564686 2069 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 00:46:53.564858 kubelet[2069]: I0910 00:46:53.564837 2069 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:46:53.564887 kubelet[2069]: I0910 00:46:53.564854 2069 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:46:53.566028 kubelet[2069]: I0910 00:46:53.565725 2069 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:46:53.668678 kubelet[2069]: I0910 00:46:53.668623 2069 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:46:53.674590 kubelet[2069]: I0910 00:46:53.674558 2069 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 10 00:46:53.674730 kubelet[2069]: I0910 00:46:53.674638 2069 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 10 00:46:53.702099 kubelet[2069]: I0910 00:46:53.702051 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:46:53.702099 kubelet[2069]: I0910 00:46:53.702092 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:46:53.702298 kubelet[2069]: I0910 00:46:53.702110 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:46:53.702298 kubelet[2069]: I0910 00:46:53.702133 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13de773fd8519b2cda02f00ffe9cf5fa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"13de773fd8519b2cda02f00ffe9cf5fa\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:46:53.702298 kubelet[2069]: I0910 00:46:53.702190 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:46:53.702298 kubelet[2069]: I0910 00:46:53.702268 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:46:53.702400 kubelet[2069]: I0910 00:46:53.702300 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:46:53.702400 kubelet[2069]: I0910 00:46:53.702321 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13de773fd8519b2cda02f00ffe9cf5fa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"13de773fd8519b2cda02f00ffe9cf5fa\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:46:53.702400 kubelet[2069]: I0910 00:46:53.702340 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13de773fd8519b2cda02f00ffe9cf5fa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"13de773fd8519b2cda02f00ffe9cf5fa\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:46:53.914080 sudo[2104]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 10 00:46:53.914300 sudo[2104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 10 00:46:53.926944 kubelet[2069]: E0910 00:46:53.926885 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:53.927909 kubelet[2069]: E0910 00:46:53.927858 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:53.928135 kubelet[2069]: E0910 00:46:53.928102 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:54.528650 kubelet[2069]: I0910 00:46:54.526665 2069 apiserver.go:52] "Watching apiserver"
Sep 10 00:46:54.537134 kubelet[2069]: E0910 00:46:54.537095 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:54.538184 kubelet[2069]: E0910 00:46:54.538141 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:46:54.592540 sudo[2104]: pam_unix(sudo:session): session closed for user root
Sep 10 00:46:54.602373 kubelet[2069]: I0910 00:46:54.602325 2069 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 
10 00:46:54.681319 kubelet[2069]: E0910 00:46:54.681261 2069 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:46:54.681505 kubelet[2069]: E0910 00:46:54.681469 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:54.699131 kubelet[2069]: I0910 00:46:54.699048 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.699002063 podStartE2EDuration="1.699002063s" podCreationTimestamp="2025-09-10 00:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:46:54.69875875 +0000 UTC m=+1.380353947" watchObservedRunningTime="2025-09-10 00:46:54.699002063 +0000 UTC m=+1.380597260" Sep 10 00:46:54.717380 kubelet[2069]: I0910 00:46:54.717323 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7173125790000001 podStartE2EDuration="1.717312579s" podCreationTimestamp="2025-09-10 00:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:46:54.717098921 +0000 UTC m=+1.398694118" watchObservedRunningTime="2025-09-10 00:46:54.717312579 +0000 UTC m=+1.398907776" Sep 10 00:46:54.717380 kubelet[2069]: I0910 00:46:54.717390 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.717386389 podStartE2EDuration="1.717386389s" podCreationTimestamp="2025-09-10 00:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-10 00:46:54.708477557 +0000 UTC m=+1.390072764" watchObservedRunningTime="2025-09-10 00:46:54.717386389 +0000 UTC m=+1.398981576" Sep 10 00:46:55.538605 kubelet[2069]: E0910 00:46:55.538527 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:55.539951 kubelet[2069]: E0910 00:46:55.538663 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:56.445911 sudo[1428]: pam_unix(sudo:session): session closed for user root Sep 10 00:46:56.447422 sshd[1422]: pam_unix(sshd:session): session closed for user core Sep 10 00:46:56.449992 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:40940.service: Deactivated successfully. Sep 10 00:46:56.450883 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:46:56.450886 systemd-logind[1293]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:46:56.451768 systemd-logind[1293]: Removed session 5. Sep 10 00:46:56.540124 kubelet[2069]: E0910 00:46:56.540087 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:56.782799 kubelet[2069]: E0910 00:46:56.782658 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:57.519287 kubelet[2069]: I0910 00:46:57.519156 2069 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:46:57.519460 env[1317]: time="2025-09-10T00:46:57.519432931Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 00:46:57.519755 kubelet[2069]: I0910 00:46:57.519565 2069 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:46:58.608903 kubelet[2069]: I0910 00:46:58.608833 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-net\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.608903 kubelet[2069]: I0910 00:46:58.608873 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-hostproc\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.608903 kubelet[2069]: I0910 00:46:58.608900 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-lib-modules\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.608903 kubelet[2069]: I0910 00:46:58.608916 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-xtables-lock\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609402 kubelet[2069]: I0910 00:46:58.608944 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-kernel\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " 
pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609402 kubelet[2069]: I0910 00:46:58.608979 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszql\" (UniqueName: \"kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-kube-api-access-hszql\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609402 kubelet[2069]: I0910 00:46:58.608999 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462h6\" (UniqueName: \"kubernetes.io/projected/fd224615-2d7e-4a06-b90b-a1f41317339c-kube-api-access-462h6\") pod \"kube-proxy-87mjl\" (UID: \"fd224615-2d7e-4a06-b90b-a1f41317339c\") " pod="kube-system/kube-proxy-87mjl" Sep 10 00:46:58.609402 kubelet[2069]: I0910 00:46:58.609025 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5b75624-00a0-4562-8f5e-1120484bbc42-clustermesh-secrets\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609402 kubelet[2069]: I0910 00:46:58.609043 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-config-path\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609530 kubelet[2069]: I0910 00:46:58.609068 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-etc-cni-netd\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609530 kubelet[2069]: I0910 
00:46:58.609083 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd224615-2d7e-4a06-b90b-a1f41317339c-kube-proxy\") pod \"kube-proxy-87mjl\" (UID: \"fd224615-2d7e-4a06-b90b-a1f41317339c\") " pod="kube-system/kube-proxy-87mjl" Sep 10 00:46:58.609530 kubelet[2069]: I0910 00:46:58.609106 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-bpf-maps\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609530 kubelet[2069]: I0910 00:46:58.609122 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cni-path\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609530 kubelet[2069]: I0910 00:46:58.609136 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-hubble-tls\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609530 kubelet[2069]: I0910 00:46:58.609151 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd224615-2d7e-4a06-b90b-a1f41317339c-lib-modules\") pod \"kube-proxy-87mjl\" (UID: \"fd224615-2d7e-4a06-b90b-a1f41317339c\") " pod="kube-system/kube-proxy-87mjl" Sep 10 00:46:58.609670 kubelet[2069]: I0910 00:46:58.609164 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-run\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609670 kubelet[2069]: I0910 00:46:58.609175 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-cgroup\") pod \"cilium-2gc54\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") " pod="kube-system/cilium-2gc54" Sep 10 00:46:58.609670 kubelet[2069]: I0910 00:46:58.609193 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd224615-2d7e-4a06-b90b-a1f41317339c-xtables-lock\") pod \"kube-proxy-87mjl\" (UID: \"fd224615-2d7e-4a06-b90b-a1f41317339c\") " pod="kube-system/kube-proxy-87mjl" Sep 10 00:46:58.709550 kubelet[2069]: I0910 00:46:58.709499 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d1368e8-7260-453c-a28a-fb897824542d-cilium-config-path\") pod \"cilium-operator-5d85765b45-wr952\" (UID: \"3d1368e8-7260-453c-a28a-fb897824542d\") " pod="kube-system/cilium-operator-5d85765b45-wr952" Sep 10 00:46:58.709945 kubelet[2069]: I0910 00:46:58.709876 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dd5k\" (UniqueName: \"kubernetes.io/projected/3d1368e8-7260-453c-a28a-fb897824542d-kube-api-access-9dd5k\") pod \"cilium-operator-5d85765b45-wr952\" (UID: \"3d1368e8-7260-453c-a28a-fb897824542d\") " pod="kube-system/cilium-operator-5d85765b45-wr952" Sep 10 00:46:58.710037 kubelet[2069]: I0910 00:46:58.709934 2069 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 10 00:46:58.758240 kubelet[2069]: E0910 00:46:58.758210 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:58.758742 env[1317]: time="2025-09-10T00:46:58.758679131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87mjl,Uid:fd224615-2d7e-4a06-b90b-a1f41317339c,Namespace:kube-system,Attempt:0,}" Sep 10 00:46:58.764185 kubelet[2069]: E0910 00:46:58.764152 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:58.765404 env[1317]: time="2025-09-10T00:46:58.765358947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2gc54,Uid:f5b75624-00a0-4562-8f5e-1120484bbc42,Namespace:kube-system,Attempt:0,}" Sep 10 00:46:58.779584 env[1317]: time="2025-09-10T00:46:58.779524770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:46:58.779737 env[1317]: time="2025-09-10T00:46:58.779564917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:46:58.779737 env[1317]: time="2025-09-10T00:46:58.779575527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:46:58.779737 env[1317]: time="2025-09-10T00:46:58.779692729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24635261adbdaaddb8f06a51e22b6b560c3e348a9c631d497df2e23986d46ea9 pid=2164 runtime=io.containerd.runc.v2 Sep 10 00:46:58.784771 env[1317]: time="2025-09-10T00:46:58.784669882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:46:58.784771 env[1317]: time="2025-09-10T00:46:58.784768249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:46:58.784954 env[1317]: time="2025-09-10T00:46:58.784792676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:46:58.785937 env[1317]: time="2025-09-10T00:46:58.785814395Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205 pid=2179 runtime=io.containerd.runc.v2 Sep 10 00:46:58.829305 env[1317]: time="2025-09-10T00:46:58.829163453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2gc54,Uid:f5b75624-00a0-4562-8f5e-1120484bbc42,Namespace:kube-system,Attempt:0,} returns sandbox id \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\"" Sep 10 00:46:58.830039 kubelet[2069]: E0910 00:46:58.830012 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:58.831426 env[1317]: time="2025-09-10T00:46:58.831394350Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 
00:46:58.832230 env[1317]: time="2025-09-10T00:46:58.832188438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-87mjl,Uid:fd224615-2d7e-4a06-b90b-a1f41317339c,Namespace:kube-system,Attempt:0,} returns sandbox id \"24635261adbdaaddb8f06a51e22b6b560c3e348a9c631d497df2e23986d46ea9\"" Sep 10 00:46:58.832849 kubelet[2069]: E0910 00:46:58.832828 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:58.834665 env[1317]: time="2025-09-10T00:46:58.834635836Z" level=info msg="CreateContainer within sandbox \"24635261adbdaaddb8f06a51e22b6b560c3e348a9c631d497df2e23986d46ea9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:46:58.955084 kubelet[2069]: E0910 00:46:58.954950 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:58.955481 env[1317]: time="2025-09-10T00:46:58.955408095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wr952,Uid:3d1368e8-7260-453c-a28a-fb897824542d,Namespace:kube-system,Attempt:0,}" Sep 10 00:46:59.078489 update_engine[1298]: I0910 00:46:59.078395 1298 update_attempter.cc:509] Updating boot flags... 
Sep 10 00:46:59.104768 env[1317]: time="2025-09-10T00:46:59.104696616Z" level=info msg="CreateContainer within sandbox \"24635261adbdaaddb8f06a51e22b6b560c3e348a9c631d497df2e23986d46ea9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"debc82f87a00a03091cd5cef06c5663e122e40b7fe0cb90437963e1232b4c437\"" Sep 10 00:46:59.111537 env[1317]: time="2025-09-10T00:46:59.107764788Z" level=info msg="StartContainer for \"debc82f87a00a03091cd5cef06c5663e122e40b7fe0cb90437963e1232b4c437\"" Sep 10 00:46:59.119208 env[1317]: time="2025-09-10T00:46:59.119104587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:46:59.119208 env[1317]: time="2025-09-10T00:46:59.119195430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:46:59.119401 env[1317]: time="2025-09-10T00:46:59.119219094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:46:59.119401 env[1317]: time="2025-09-10T00:46:59.119343681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd pid=2262 runtime=io.containerd.runc.v2 Sep 10 00:46:59.186734 env[1317]: time="2025-09-10T00:46:59.186673180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wr952,Uid:3d1368e8-7260-453c-a28a-fb897824542d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\"" Sep 10 00:46:59.188551 kubelet[2069]: E0910 00:46:59.187613 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:46:59.211357 env[1317]: time="2025-09-10T00:46:59.211249026Z" level=info msg="StartContainer for \"debc82f87a00a03091cd5cef06c5663e122e40b7fe0cb90437963e1232b4c437\" returns successfully" Sep 10 00:46:59.549936 kubelet[2069]: E0910 00:46:59.549805 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:04.109861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794369977.mount: Deactivated successfully. 
Sep 10 00:47:05.334653 kubelet[2069]: E0910 00:47:05.334582 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:05.344469 kubelet[2069]: I0910 00:47:05.344163 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-87mjl" podStartSLOduration=7.344139532 podStartE2EDuration="7.344139532s" podCreationTimestamp="2025-09-10 00:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:46:59.801972014 +0000 UTC m=+6.483567221" watchObservedRunningTime="2025-09-10 00:47:05.344139532 +0000 UTC m=+12.025734729" Sep 10 00:47:05.411783 kubelet[2069]: E0910 00:47:05.411724 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:06.786470 kubelet[2069]: E0910 00:47:06.786434 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:08.038287 env[1317]: time="2025-09-10T00:47:08.038210041Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:47:08.040684 env[1317]: time="2025-09-10T00:47:08.040622364Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:47:08.042356 env[1317]: time="2025-09-10T00:47:08.042325459Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:47:08.043170 env[1317]: time="2025-09-10T00:47:08.043125569Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 10 00:47:08.047317 env[1317]: time="2025-09-10T00:47:08.047248032Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:47:08.049863 env[1317]: time="2025-09-10T00:47:08.049826118Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:47:08.062251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584418149.mount: Deactivated successfully. 
Sep 10 00:47:08.063972 env[1317]: time="2025-09-10T00:47:08.063876140Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\"" Sep 10 00:47:08.064406 env[1317]: time="2025-09-10T00:47:08.064364512Z" level=info msg="StartContainer for \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\"" Sep 10 00:47:08.401127 env[1317]: time="2025-09-10T00:47:08.401026672Z" level=info msg="StartContainer for \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\" returns successfully" Sep 10 00:47:08.425937 env[1317]: time="2025-09-10T00:47:08.425865751Z" level=info msg="shim disconnected" id=618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1 Sep 10 00:47:08.425937 env[1317]: time="2025-09-10T00:47:08.425935242Z" level=warning msg="cleaning up after shim disconnected" id=618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1 namespace=k8s.io Sep 10 00:47:08.425937 env[1317]: time="2025-09-10T00:47:08.425945411Z" level=info msg="cleaning up dead shim" Sep 10 00:47:08.433149 env[1317]: time="2025-09-10T00:47:08.433117530Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:47:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2507 runtime=io.containerd.runc.v2\n" Sep 10 00:47:08.572054 kubelet[2069]: E0910 00:47:08.572016 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:08.573835 env[1317]: time="2025-09-10T00:47:08.573781478Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:47:08.590194 env[1317]: 
time="2025-09-10T00:47:08.588878737Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\"" Sep 10 00:47:08.590194 env[1317]: time="2025-09-10T00:47:08.589782713Z" level=info msg="StartContainer for \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\"" Sep 10 00:47:08.632775 env[1317]: time="2025-09-10T00:47:08.632709235Z" level=info msg="StartContainer for \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\" returns successfully" Sep 10 00:47:08.641218 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:47:08.641724 systemd[1]: Stopped systemd-sysctl.service. Sep 10 00:47:08.641983 systemd[1]: Stopping systemd-sysctl.service... Sep 10 00:47:08.643482 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:47:08.654046 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:47:08.662410 env[1317]: time="2025-09-10T00:47:08.662355838Z" level=info msg="shim disconnected" id=4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111 Sep 10 00:47:08.662410 env[1317]: time="2025-09-10T00:47:08.662407987Z" level=warning msg="cleaning up after shim disconnected" id=4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111 namespace=k8s.io Sep 10 00:47:08.662584 env[1317]: time="2025-09-10T00:47:08.662416924Z" level=info msg="cleaning up dead shim" Sep 10 00:47:08.670939 env[1317]: time="2025-09-10T00:47:08.670880841Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:47:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2572 runtime=io.containerd.runc.v2\n" Sep 10 00:47:09.060960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1-rootfs.mount: Deactivated successfully. 
Sep 10 00:47:09.425598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354824100.mount: Deactivated successfully. Sep 10 00:47:09.575553 kubelet[2069]: E0910 00:47:09.575506 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:09.578676 env[1317]: time="2025-09-10T00:47:09.577884207Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:47:09.838348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171379707.mount: Deactivated successfully. Sep 10 00:47:09.847985 env[1317]: time="2025-09-10T00:47:09.847919252Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\"" Sep 10 00:47:09.848664 env[1317]: time="2025-09-10T00:47:09.848463950Z" level=info msg="StartContainer for \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\"" Sep 10 00:47:09.909152 env[1317]: time="2025-09-10T00:47:09.909077552Z" level=info msg="StartContainer for \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\" returns successfully" Sep 10 00:47:09.938933 env[1317]: time="2025-09-10T00:47:09.938867282Z" level=info msg="shim disconnected" id=2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c Sep 10 00:47:09.938933 env[1317]: time="2025-09-10T00:47:09.938926372Z" level=warning msg="cleaning up after shim disconnected" id=2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c namespace=k8s.io Sep 10 00:47:09.938933 env[1317]: time="2025-09-10T00:47:09.938937204Z" level=info msg="cleaning up dead shim" Sep 10 00:47:09.947519 env[1317]: 
time="2025-09-10T00:47:09.947460726Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:47:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2629 runtime=io.containerd.runc.v2\n" Sep 10 00:47:10.441074 env[1317]: time="2025-09-10T00:47:10.441005173Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:47:10.442916 env[1317]: time="2025-09-10T00:47:10.442849171Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:47:10.444797 env[1317]: time="2025-09-10T00:47:10.444762109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:47:10.445407 env[1317]: time="2025-09-10T00:47:10.445359686Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 10 00:47:10.447649 env[1317]: time="2025-09-10T00:47:10.447593650Z" level=info msg="CreateContainer within sandbox \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 00:47:10.459105 env[1317]: time="2025-09-10T00:47:10.459032375Z" level=info msg="CreateContainer within sandbox \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\"" Sep 10 00:47:10.459774 env[1317]: time="2025-09-10T00:47:10.459737194Z" level=info msg="StartContainer for \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\"" Sep 10 00:47:10.507449 env[1317]: time="2025-09-10T00:47:10.507385425Z" level=info msg="StartContainer for \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\" returns successfully" Sep 10 00:47:10.580497 kubelet[2069]: E0910 00:47:10.580443 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:10.584551 kubelet[2069]: E0910 00:47:10.584507 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:10.589964 env[1317]: time="2025-09-10T00:47:10.589904929Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:47:10.611778 env[1317]: time="2025-09-10T00:47:10.611694838Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\"" Sep 10 00:47:10.613937 env[1317]: time="2025-09-10T00:47:10.613867235Z" level=info msg="StartContainer for \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\"" Sep 10 00:47:10.643939 kubelet[2069]: I0910 00:47:10.643678 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wr952" podStartSLOduration=1.3893129 podStartE2EDuration="12.643651904s" podCreationTimestamp="2025-09-10 00:46:58 +0000 UTC" 
firstStartedPulling="2025-09-10 00:46:59.192113163 +0000 UTC m=+5.873708360" lastFinishedPulling="2025-09-10 00:47:10.446452177 +0000 UTC m=+17.128047364" observedRunningTime="2025-09-10 00:47:10.601803662 +0000 UTC m=+17.283398859" watchObservedRunningTime="2025-09-10 00:47:10.643651904 +0000 UTC m=+17.325247101" Sep 10 00:47:11.185095 env[1317]: time="2025-09-10T00:47:11.185020095Z" level=info msg="StartContainer for \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\" returns successfully" Sep 10 00:47:11.202256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec-rootfs.mount: Deactivated successfully. Sep 10 00:47:11.290351 env[1317]: time="2025-09-10T00:47:11.290278995Z" level=info msg="shim disconnected" id=bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec Sep 10 00:47:11.290728 env[1317]: time="2025-09-10T00:47:11.290704668Z" level=warning msg="cleaning up after shim disconnected" id=bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec namespace=k8s.io Sep 10 00:47:11.290828 env[1317]: time="2025-09-10T00:47:11.290803675Z" level=info msg="cleaning up dead shim" Sep 10 00:47:11.308497 env[1317]: time="2025-09-10T00:47:11.308430900Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:47:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2727 runtime=io.containerd.runc.v2\n" Sep 10 00:47:11.590369 kubelet[2069]: E0910 00:47:11.590272 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:11.590967 kubelet[2069]: E0910 00:47:11.590551 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:11.596266 env[1317]: time="2025-09-10T00:47:11.596220743Z" level=info 
msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:47:11.685706 env[1317]: time="2025-09-10T00:47:11.685616073Z" level=info msg="CreateContainer within sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\"" Sep 10 00:47:11.686425 env[1317]: time="2025-09-10T00:47:11.686383100Z" level=info msg="StartContainer for \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\"" Sep 10 00:47:11.760200 env[1317]: time="2025-09-10T00:47:11.760121349Z" level=info msg="StartContainer for \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\" returns successfully" Sep 10 00:47:11.940011 kubelet[2069]: I0910 00:47:11.939850 2069 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:47:12.006867 kubelet[2069]: I0910 00:47:12.006790 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eebefa23-6f8b-47da-bbfe-8afa503f90c8-config-volume\") pod \"coredns-7c65d6cfc9-7l5bx\" (UID: \"eebefa23-6f8b-47da-bbfe-8afa503f90c8\") " pod="kube-system/coredns-7c65d6cfc9-7l5bx" Sep 10 00:47:12.006867 kubelet[2069]: I0910 00:47:12.006848 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9j2t\" (UniqueName: \"kubernetes.io/projected/8c062f1a-ac12-4925-bfa8-0edbb28af6ea-kube-api-access-w9j2t\") pod \"coredns-7c65d6cfc9-gmw2k\" (UID: \"8c062f1a-ac12-4925-bfa8-0edbb28af6ea\") " pod="kube-system/coredns-7c65d6cfc9-gmw2k" Sep 10 00:47:12.006867 kubelet[2069]: I0910 00:47:12.006874 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c062f1a-ac12-4925-bfa8-0edbb28af6ea-config-volume\") pod \"coredns-7c65d6cfc9-gmw2k\" (UID: \"8c062f1a-ac12-4925-bfa8-0edbb28af6ea\") " pod="kube-system/coredns-7c65d6cfc9-gmw2k" Sep 10 00:47:12.007167 kubelet[2069]: I0910 00:47:12.006921 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sgjz\" (UniqueName: \"kubernetes.io/projected/eebefa23-6f8b-47da-bbfe-8afa503f90c8-kube-api-access-5sgjz\") pod \"coredns-7c65d6cfc9-7l5bx\" (UID: \"eebefa23-6f8b-47da-bbfe-8afa503f90c8\") " pod="kube-system/coredns-7c65d6cfc9-7l5bx" Sep 10 00:47:12.317278 kubelet[2069]: E0910 00:47:12.317027 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:12.317598 kubelet[2069]: E0910 00:47:12.317260 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:12.319222 env[1317]: time="2025-09-10T00:47:12.318762732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7l5bx,Uid:eebefa23-6f8b-47da-bbfe-8afa503f90c8,Namespace:kube-system,Attempt:0,}" Sep 10 00:47:12.320039 env[1317]: time="2025-09-10T00:47:12.319960770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gmw2k,Uid:8c062f1a-ac12-4925-bfa8-0edbb28af6ea,Namespace:kube-system,Attempt:0,}" Sep 10 00:47:12.594727 kubelet[2069]: E0910 00:47:12.594675 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:13.595919 kubelet[2069]: E0910 00:47:13.595867 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:14.017921 systemd-networkd[1076]: cilium_host: Link UP Sep 10 00:47:14.018105 systemd-networkd[1076]: cilium_net: Link UP Sep 10 00:47:14.018109 systemd-networkd[1076]: cilium_net: Gained carrier Sep 10 00:47:14.020433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 10 00:47:14.020768 systemd-networkd[1076]: cilium_host: Gained carrier Sep 10 00:47:14.021249 systemd-networkd[1076]: cilium_host: Gained IPv6LL Sep 10 00:47:14.056412 systemd-networkd[1076]: cilium_net: Gained IPv6LL Sep 10 00:47:14.108857 systemd-networkd[1076]: cilium_vxlan: Link UP Sep 10 00:47:14.108863 systemd-networkd[1076]: cilium_vxlan: Gained carrier Sep 10 00:47:14.332948 kernel: NET: Registered PF_ALG protocol family Sep 10 00:47:14.598728 kubelet[2069]: E0910 00:47:14.598575 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:15.010570 systemd-networkd[1076]: lxc_health: Link UP Sep 10 00:47:15.023949 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 10 00:47:15.025990 systemd-networkd[1076]: lxc_health: Gained carrier Sep 10 00:47:15.486579 systemd-networkd[1076]: lxc6783bf0e94ee: Link UP Sep 10 00:47:15.494026 kernel: eth0: renamed from tmpdf48e Sep 10 00:47:15.502003 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6783bf0e94ee: link becomes ready Sep 10 00:47:15.509413 systemd-networkd[1076]: lxc6783bf0e94ee: Gained carrier Sep 10 00:47:15.513191 systemd-networkd[1076]: lxcd778a04cfe03: Link UP Sep 10 00:47:15.521345 kernel: eth0: renamed from tmpd78cc Sep 10 00:47:15.528427 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd778a04cfe03: link becomes ready Sep 10 00:47:15.528189 systemd-networkd[1076]: lxcd778a04cfe03: Gained carrier Sep 10 00:47:15.738102 systemd-networkd[1076]: cilium_vxlan: Gained IPv6LL Sep 10 00:47:16.766528 
kubelet[2069]: E0910 00:47:16.766450 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:16.786414 kubelet[2069]: I0910 00:47:16.786336 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2gc54" podStartSLOduration=9.570208188 podStartE2EDuration="18.786311137s" podCreationTimestamp="2025-09-10 00:46:58 +0000 UTC" firstStartedPulling="2025-09-10 00:46:58.830805892 +0000 UTC m=+5.512401089" lastFinishedPulling="2025-09-10 00:47:08.046908841 +0000 UTC m=+14.728504038" observedRunningTime="2025-09-10 00:47:12.61593972 +0000 UTC m=+19.297534927" watchObservedRunningTime="2025-09-10 00:47:16.786311137 +0000 UTC m=+23.467906334" Sep 10 00:47:16.826287 systemd-networkd[1076]: lxcd778a04cfe03: Gained IPv6LL Sep 10 00:47:16.826660 systemd-networkd[1076]: lxc_health: Gained IPv6LL Sep 10 00:47:17.274195 systemd-networkd[1076]: lxc6783bf0e94ee: Gained IPv6LL Sep 10 00:47:17.604355 kubelet[2069]: E0910 00:47:17.604319 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:18.605931 kubelet[2069]: E0910 00:47:18.605881 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:19.804953 env[1317]: time="2025-09-10T00:47:19.804863323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:47:19.804953 env[1317]: time="2025-09-10T00:47:19.804922445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:47:19.804953 env[1317]: time="2025-09-10T00:47:19.804933035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:47:19.805413 env[1317]: time="2025-09-10T00:47:19.805097104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df48e0c3f8f64a1e3348b89021b697aa758deb623af9a8e7be4d6efee6ad9648 pid=3291 runtime=io.containerd.runc.v2 Sep 10 00:47:19.820327 systemd[1]: run-containerd-runc-k8s.io-df48e0c3f8f64a1e3348b89021b697aa758deb623af9a8e7be4d6efee6ad9648-runc.KlrRUC.mount: Deactivated successfully. Sep 10 00:47:19.831685 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:47:19.853842 env[1317]: time="2025-09-10T00:47:19.853794725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7l5bx,Uid:eebefa23-6f8b-47da-bbfe-8afa503f90c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"df48e0c3f8f64a1e3348b89021b697aa758deb623af9a8e7be4d6efee6ad9648\"" Sep 10 00:47:19.854758 kubelet[2069]: E0910 00:47:19.854724 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:19.856695 env[1317]: time="2025-09-10T00:47:19.856648374Z" level=info msg="CreateContainer within sandbox \"df48e0c3f8f64a1e3348b89021b697aa758deb623af9a8e7be4d6efee6ad9648\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:47:19.870834 env[1317]: time="2025-09-10T00:47:19.870714946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:47:19.870834 env[1317]: time="2025-09-10T00:47:19.870763968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:47:19.870834 env[1317]: time="2025-09-10T00:47:19.870775440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:47:19.871111 env[1317]: time="2025-09-10T00:47:19.870934359Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d78cc3e493ecb49f0bafd67dafc5e54848b60b29436279376197607f08ac6a10 pid=3331 runtime=io.containerd.runc.v2 Sep 10 00:47:19.895371 systemd-resolved[1222]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:47:19.922366 env[1317]: time="2025-09-10T00:47:19.922309668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gmw2k,Uid:8c062f1a-ac12-4925-bfa8-0edbb28af6ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d78cc3e493ecb49f0bafd67dafc5e54848b60b29436279376197607f08ac6a10\"" Sep 10 00:47:19.927172 kubelet[2069]: E0910 00:47:19.927122 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:19.928977 env[1317]: time="2025-09-10T00:47:19.928933639Z" level=info msg="CreateContainer within sandbox \"d78cc3e493ecb49f0bafd67dafc5e54848b60b29436279376197607f08ac6a10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:47:20.912369 env[1317]: time="2025-09-10T00:47:20.912290870Z" level=info msg="CreateContainer within sandbox \"df48e0c3f8f64a1e3348b89021b697aa758deb623af9a8e7be4d6efee6ad9648\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"98692df16a1fae7b4281d8cd00a74dc63dea433b04991558180c9c173576ac0d\"" Sep 10 00:47:20.913115 env[1317]: time="2025-09-10T00:47:20.913081728Z" level=info msg="StartContainer for \"98692df16a1fae7b4281d8cd00a74dc63dea433b04991558180c9c173576ac0d\"" Sep 10 00:47:21.218109 env[1317]: time="2025-09-10T00:47:21.217975867Z" level=info msg="CreateContainer within sandbox \"d78cc3e493ecb49f0bafd67dafc5e54848b60b29436279376197607f08ac6a10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c054f1ac85353525a5a15094e882d04b659918c63d6aeefec6e18d2ab0b179af\"" Sep 10 00:47:21.219236 env[1317]: time="2025-09-10T00:47:21.219184801Z" level=info msg="StartContainer for \"c054f1ac85353525a5a15094e882d04b659918c63d6aeefec6e18d2ab0b179af\"" Sep 10 00:47:21.524560 env[1317]: time="2025-09-10T00:47:21.524386377Z" level=info msg="StartContainer for \"98692df16a1fae7b4281d8cd00a74dc63dea433b04991558180c9c173576ac0d\" returns successfully" Sep 10 00:47:21.657461 env[1317]: time="2025-09-10T00:47:21.657337659Z" level=info msg="StartContainer for \"c054f1ac85353525a5a15094e882d04b659918c63d6aeefec6e18d2ab0b179af\" returns successfully" Sep 10 00:47:21.661348 kubelet[2069]: E0910 00:47:21.661253 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:21.663600 kubelet[2069]: E0910 00:47:21.663550 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:22.045954 kubelet[2069]: I0910 00:47:22.045843 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gmw2k" podStartSLOduration=24.045812482 podStartE2EDuration="24.045812482s" podCreationTimestamp="2025-09-10 00:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:47:21.988190962 +0000 UTC m=+28.669786149" watchObservedRunningTime="2025-09-10 00:47:22.045812482 +0000 UTC m=+28.727407679" Sep 10 00:47:22.076161 kubelet[2069]: I0910 00:47:22.076077 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7l5bx" podStartSLOduration=24.076039043 podStartE2EDuration="24.076039043s" podCreationTimestamp="2025-09-10 00:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:47:22.047438669 +0000 UTC m=+28.729033866" watchObservedRunningTime="2025-09-10 00:47:22.076039043 +0000 UTC m=+28.757634250" Sep 10 00:47:22.665352 kubelet[2069]: E0910 00:47:22.665313 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:22.665791 kubelet[2069]: E0910 00:47:22.665580 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:23.667451 kubelet[2069]: E0910 00:47:23.667409 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:23.667861 kubelet[2069]: E0910 00:47:23.667580 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:47:28.051953 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:56230.service. 
Sep 10 00:47:28.099485 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 56230 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:28.101205 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:28.106147 systemd-logind[1293]: New session 6 of user core. Sep 10 00:47:28.106981 systemd[1]: Started session-6.scope. Sep 10 00:47:28.298419 sshd[3448]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:28.301092 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:56230.service: Deactivated successfully. Sep 10 00:47:28.302156 systemd-logind[1293]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:47:28.302166 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:47:28.303138 systemd-logind[1293]: Removed session 6. Sep 10 00:47:33.302131 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:35418.service. Sep 10 00:47:33.341175 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 35418 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:33.342818 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:33.346798 systemd-logind[1293]: New session 7 of user core. Sep 10 00:47:33.347531 systemd[1]: Started session-7.scope. Sep 10 00:47:33.485464 sshd[3465]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:33.488314 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:35418.service: Deactivated successfully. Sep 10 00:47:33.489327 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:47:33.489335 systemd-logind[1293]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:47:33.490304 systemd-logind[1293]: Removed session 7. Sep 10 00:47:38.489088 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:35422.service. 
Sep 10 00:47:38.525595 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 35422 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:38.527115 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:38.531685 systemd-logind[1293]: New session 8 of user core. Sep 10 00:47:38.532488 systemd[1]: Started session-8.scope. Sep 10 00:47:38.648717 sshd[3480]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:38.651823 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:35422.service: Deactivated successfully. Sep 10 00:47:38.652868 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:47:38.653925 systemd-logind[1293]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:47:38.654687 systemd-logind[1293]: Removed session 8. Sep 10 00:47:43.652776 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:40644.service. Sep 10 00:47:43.692056 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 40644 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:43.693378 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:43.696695 systemd-logind[1293]: New session 9 of user core. Sep 10 00:47:43.697593 systemd[1]: Started session-9.scope. Sep 10 00:47:43.813867 sshd[3495]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:43.816308 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:40644.service: Deactivated successfully. Sep 10 00:47:43.817335 systemd-logind[1293]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:47:43.817363 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:47:43.818088 systemd-logind[1293]: Removed session 9. Sep 10 00:47:48.817305 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:40646.service. 
Sep 10 00:47:48.861703 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 40646 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:48.863345 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:48.867855 systemd-logind[1293]: New session 10 of user core. Sep 10 00:47:48.868787 systemd[1]: Started session-10.scope. Sep 10 00:47:48.986687 sshd[3510]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:48.989325 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:40646.service: Deactivated successfully. Sep 10 00:47:48.990583 systemd-logind[1293]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:47:48.990707 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:47:48.991681 systemd-logind[1293]: Removed session 10. Sep 10 00:47:53.991100 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:33714.service. Sep 10 00:47:54.027921 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 33714 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:54.029415 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:54.034069 systemd-logind[1293]: New session 11 of user core. Sep 10 00:47:54.035306 systemd[1]: Started session-11.scope. Sep 10 00:47:54.159050 sshd[3527]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:54.162211 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:33722.service. Sep 10 00:47:54.162720 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:33714.service: Deactivated successfully. Sep 10 00:47:54.163789 systemd-logind[1293]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:47:54.163984 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:47:54.165427 systemd-logind[1293]: Removed session 11. 
Sep 10 00:47:54.201506 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 33722 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:54.203218 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:54.208502 systemd-logind[1293]: New session 12 of user core. Sep 10 00:47:54.209692 systemd[1]: Started session-12.scope. Sep 10 00:47:54.380813 sshd[3540]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:54.385788 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:33726.service. Sep 10 00:47:54.386860 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:33722.service: Deactivated successfully. Sep 10 00:47:54.388396 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:47:54.388664 systemd-logind[1293]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:47:54.405144 systemd-logind[1293]: Removed session 12. Sep 10 00:47:54.437053 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 33726 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:54.438637 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:54.443774 systemd-logind[1293]: New session 13 of user core. Sep 10 00:47:54.445206 systemd[1]: Started session-13.scope. Sep 10 00:47:54.562553 sshd[3553]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:54.565600 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:33726.service: Deactivated successfully. Sep 10 00:47:54.566769 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:47:54.568019 systemd-logind[1293]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:47:54.569002 systemd-logind[1293]: Removed session 13. Sep 10 00:47:59.567257 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:33736.service. 
Sep 10 00:47:59.607730 sshd[3570]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:47:59.609592 sshd[3570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:47:59.614867 systemd-logind[1293]: New session 14 of user core. Sep 10 00:47:59.616070 systemd[1]: Started session-14.scope. Sep 10 00:47:59.735969 sshd[3570]: pam_unix(sshd:session): session closed for user core Sep 10 00:47:59.738656 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:33736.service: Deactivated successfully. Sep 10 00:47:59.739671 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:47:59.740562 systemd-logind[1293]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:47:59.741458 systemd-logind[1293]: Removed session 14. Sep 10 00:48:04.739855 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:40136.service. Sep 10 00:48:04.818918 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 40136 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:48:04.820259 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:48:04.823982 systemd-logind[1293]: New session 15 of user core. Sep 10 00:48:04.824744 systemd[1]: Started session-15.scope. Sep 10 00:48:04.939508 sshd[3585]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:04.942410 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:40136.service: Deactivated successfully. Sep 10 00:48:04.943377 systemd-logind[1293]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:48:04.943431 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:48:04.944243 systemd-logind[1293]: Removed session 15. Sep 10 00:48:09.943460 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:59330.service. 
Sep 10 00:48:09.983539 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 59330 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:48:09.985124 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:48:09.989187 systemd-logind[1293]: New session 16 of user core. Sep 10 00:48:09.990021 systemd[1]: Started session-16.scope. Sep 10 00:48:10.104533 sshd[3599]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:10.107491 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:59336.service. Sep 10 00:48:10.108228 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:59330.service: Deactivated successfully. Sep 10 00:48:10.109315 systemd-logind[1293]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:48:10.109326 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:48:10.111376 systemd-logind[1293]: Removed session 16. Sep 10 00:48:10.142927 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 59336 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:48:10.144203 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:48:10.148853 systemd-logind[1293]: New session 17 of user core. Sep 10 00:48:10.149534 systemd[1]: Started session-17.scope. Sep 10 00:48:10.518494 kubelet[2069]: E0910 00:48:10.518429 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:10.703553 sshd[3612]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:10.705808 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:59350.service. Sep 10 00:48:10.706668 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:59336.service: Deactivated successfully. Sep 10 00:48:10.707654 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:48:10.708217 systemd-logind[1293]: Session 17 logged out. 
Waiting for processes to exit. Sep 10 00:48:10.709072 systemd-logind[1293]: Removed session 17. Sep 10 00:48:10.745242 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 59350 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:48:10.746263 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:48:10.749794 systemd-logind[1293]: New session 18 of user core. Sep 10 00:48:10.750632 systemd[1]: Started session-18.scope. Sep 10 00:48:12.182574 sshd[3623]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:12.183057 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:59356.service. Sep 10 00:48:12.187034 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:59350.service: Deactivated successfully. Sep 10 00:48:12.188136 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:48:12.188619 systemd-logind[1293]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:48:12.189586 systemd-logind[1293]: Removed session 18. Sep 10 00:48:12.229325 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 59356 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8 Sep 10 00:48:12.230566 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:48:12.234919 systemd-logind[1293]: New session 19 of user core. Sep 10 00:48:12.235697 systemd[1]: Started session-19.scope. Sep 10 00:48:12.498230 sshd[3641]: pam_unix(sshd:session): session closed for user core Sep 10 00:48:12.501585 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:59372.service. Sep 10 00:48:12.502218 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:59356.service: Deactivated successfully. Sep 10 00:48:12.503708 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:48:12.504798 systemd-logind[1293]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:48:12.506385 systemd-logind[1293]: Removed session 19. 
Sep 10 00:48:12.519320 kubelet[2069]: E0910 00:48:12.519281 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:12.542375 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 59372 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:12.543722 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:12.547167 systemd-logind[1293]: New session 20 of user core.
Sep 10 00:48:12.547948 systemd[1]: Started session-20.scope.
Sep 10 00:48:12.676071 sshd[3655]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:12.678402 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:59372.service: Deactivated successfully.
Sep 10 00:48:12.679371 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 00:48:12.679489 systemd-logind[1293]: Session 20 logged out. Waiting for processes to exit.
Sep 10 00:48:12.680268 systemd-logind[1293]: Removed session 20.
Sep 10 00:48:13.518510 kubelet[2069]: E0910 00:48:13.518430 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:17.680309 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:59384.service.
Sep 10 00:48:17.718699 sshd[3672]: Accepted publickey for core from 10.0.0.1 port 59384 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:17.720113 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:17.723934 systemd-logind[1293]: New session 21 of user core.
Sep 10 00:48:17.724791 systemd[1]: Started session-21.scope.
Sep 10 00:48:17.826251 sshd[3672]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:17.828349 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:59384.service: Deactivated successfully.
Sep 10 00:48:17.829355 systemd-logind[1293]: Session 21 logged out. Waiting for processes to exit.
Sep 10 00:48:17.829368 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 00:48:17.830104 systemd-logind[1293]: Removed session 21.
Sep 10 00:48:22.518921 kubelet[2069]: E0910 00:48:22.518863 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:22.829810 systemd[1]: Started sshd@21-10.0.0.93:22-10.0.0.1:56302.service.
Sep 10 00:48:22.867192 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 56302 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:22.868641 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:22.872690 systemd-logind[1293]: New session 22 of user core.
Sep 10 00:48:22.873467 systemd[1]: Started session-22.scope.
Sep 10 00:48:22.982733 sshd[3690]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:22.985378 systemd[1]: sshd@21-10.0.0.93:22-10.0.0.1:56302.service: Deactivated successfully.
Sep 10 00:48:22.986440 systemd-logind[1293]: Session 22 logged out. Waiting for processes to exit.
Sep 10 00:48:22.986533 systemd[1]: session-22.scope: Deactivated successfully.
Sep 10 00:48:22.987435 systemd-logind[1293]: Removed session 22.
Sep 10 00:48:27.986471 systemd[1]: Started sshd@22-10.0.0.93:22-10.0.0.1:56316.service.
Sep 10 00:48:28.021636 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 56316 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:28.022888 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:28.026284 systemd-logind[1293]: New session 23 of user core.
Sep 10 00:48:28.027292 systemd[1]: Started session-23.scope.
Sep 10 00:48:28.143371 sshd[3704]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:28.146461 systemd[1]: sshd@22-10.0.0.93:22-10.0.0.1:56316.service: Deactivated successfully.
Sep 10 00:48:28.147850 systemd-logind[1293]: Session 23 logged out. Waiting for processes to exit.
Sep 10 00:48:28.147885 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 00:48:28.148611 systemd-logind[1293]: Removed session 23.
Sep 10 00:48:31.518801 kubelet[2069]: E0910 00:48:31.518761 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:31.518801 kubelet[2069]: E0910 00:48:31.518810 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:33.146680 systemd[1]: Started sshd@23-10.0.0.93:22-10.0.0.1:38846.service.
Sep 10 00:48:33.182857 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 38846 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:33.183910 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:33.187504 systemd-logind[1293]: New session 24 of user core.
Sep 10 00:48:33.188271 systemd[1]: Started session-24.scope.
Sep 10 00:48:33.298320 sshd[3720]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:33.301622 systemd[1]: Started sshd@24-10.0.0.93:22-10.0.0.1:38850.service.
Sep 10 00:48:33.302209 systemd[1]: sshd@23-10.0.0.93:22-10.0.0.1:38846.service: Deactivated successfully.
Sep 10 00:48:33.303529 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 00:48:33.304029 systemd-logind[1293]: Session 24 logged out. Waiting for processes to exit.
Sep 10 00:48:33.305068 systemd-logind[1293]: Removed session 24.
Sep 10 00:48:33.340473 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 38850 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:33.342078 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:33.346216 systemd-logind[1293]: New session 25 of user core.
Sep 10 00:48:33.347093 systemd[1]: Started session-25.scope.
Sep 10 00:48:33.519190 kubelet[2069]: E0910 00:48:33.519066 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:34.692713 env[1317]: time="2025-09-10T00:48:34.692657980Z" level=info msg="StopContainer for \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\" with timeout 30 (s)"
Sep 10 00:48:34.693553 env[1317]: time="2025-09-10T00:48:34.693506929Z" level=info msg="Stop container \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\" with signal terminated"
Sep 10 00:48:34.709765 systemd[1]: run-containerd-runc-k8s.io-d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038-runc.nzusAk.mount: Deactivated successfully.
Sep 10 00:48:34.726998 env[1317]: time="2025-09-10T00:48:34.726916042Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:48:34.730255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336-rootfs.mount: Deactivated successfully.
Sep 10 00:48:34.734348 env[1317]: time="2025-09-10T00:48:34.734302558Z" level=info msg="StopContainer for \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\" with timeout 2 (s)"
Sep 10 00:48:34.734596 env[1317]: time="2025-09-10T00:48:34.734556419Z" level=info msg="Stop container \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\" with signal terminated"
Sep 10 00:48:34.740829 systemd-networkd[1076]: lxc_health: Link DOWN
Sep 10 00:48:34.740837 systemd-networkd[1076]: lxc_health: Lost carrier
Sep 10 00:48:34.746427 env[1317]: time="2025-09-10T00:48:34.746367559Z" level=info msg="shim disconnected" id=413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336
Sep 10 00:48:34.746427 env[1317]: time="2025-09-10T00:48:34.746408307Z" level=warning msg="cleaning up after shim disconnected" id=413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336 namespace=k8s.io
Sep 10 00:48:34.746427 env[1317]: time="2025-09-10T00:48:34.746417123Z" level=info msg="cleaning up dead shim"
Sep 10 00:48:34.757313 env[1317]: time="2025-09-10T00:48:34.757252564Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3790 runtime=io.containerd.runc.v2\n"
Sep 10 00:48:34.760489 env[1317]: time="2025-09-10T00:48:34.760441737Z" level=info msg="StopContainer for \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\" returns successfully"
Sep 10 00:48:34.761127 env[1317]: time="2025-09-10T00:48:34.761092330Z" level=info msg="StopPodSandbox for \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\""
Sep 10 00:48:34.761199 env[1317]: time="2025-09-10T00:48:34.761152654Z" level=info msg="Container to stop \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:48:34.763922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd-shm.mount: Deactivated successfully.
Sep 10 00:48:34.786304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd-rootfs.mount: Deactivated successfully.
Sep 10 00:48:34.793641 env[1317]: time="2025-09-10T00:48:34.793591498Z" level=info msg="shim disconnected" id=ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd
Sep 10 00:48:34.793641 env[1317]: time="2025-09-10T00:48:34.793638227Z" level=warning msg="cleaning up after shim disconnected" id=ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd namespace=k8s.io
Sep 10 00:48:34.793641 env[1317]: time="2025-09-10T00:48:34.793647095Z" level=info msg="cleaning up dead shim"
Sep 10 00:48:34.799234 env[1317]: time="2025-09-10T00:48:34.799176231Z" level=info msg="shim disconnected" id=d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038
Sep 10 00:48:34.799492 env[1317]: time="2025-09-10T00:48:34.799474406Z" level=warning msg="cleaning up after shim disconnected" id=d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038 namespace=k8s.io
Sep 10 00:48:34.799573 env[1317]: time="2025-09-10T00:48:34.799553947Z" level=info msg="cleaning up dead shim"
Sep 10 00:48:34.801867 env[1317]: time="2025-09-10T00:48:34.801828817Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3837 runtime=io.containerd.runc.v2\n"
Sep 10 00:48:34.802954 env[1317]: time="2025-09-10T00:48:34.802879328Z" level=info msg="TearDown network for sandbox \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" successfully"
Sep 10 00:48:34.803012 env[1317]: time="2025-09-10T00:48:34.802955091Z" level=info msg="StopPodSandbox for \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" returns successfully"
Sep 10 00:48:34.806699 env[1317]: time="2025-09-10T00:48:34.806662706Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3849 runtime=io.containerd.runc.v2\n"
Sep 10 00:48:34.809971 env[1317]: time="2025-09-10T00:48:34.809931120Z" level=info msg="StopContainer for \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\" returns successfully"
Sep 10 00:48:34.810451 env[1317]: time="2025-09-10T00:48:34.810424474Z" level=info msg="StopPodSandbox for \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\""
Sep 10 00:48:34.810610 env[1317]: time="2025-09-10T00:48:34.810564760Z" level=info msg="Container to stop \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:48:34.810610 env[1317]: time="2025-09-10T00:48:34.810593975Z" level=info msg="Container to stop \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:48:34.810718 env[1317]: time="2025-09-10T00:48:34.810610797Z" level=info msg="Container to stop \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:48:34.810718 env[1317]: time="2025-09-10T00:48:34.810632337Z" level=info msg="Container to stop \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:48:34.810718 env[1317]: time="2025-09-10T00:48:34.810645964Z" level=info msg="Container to stop \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:48:34.832511 env[1317]: time="2025-09-10T00:48:34.832453805Z" level=info msg="shim disconnected" id=38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205
Sep 10 00:48:34.832511 env[1317]: time="2025-09-10T00:48:34.832509831Z" level=warning msg="cleaning up after shim disconnected" id=38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205 namespace=k8s.io
Sep 10 00:48:34.832741 env[1317]: time="2025-09-10T00:48:34.832520542Z" level=info msg="cleaning up dead shim"
Sep 10 00:48:34.839925 env[1317]: time="2025-09-10T00:48:34.839849559Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3881 runtime=io.containerd.runc.v2\n"
Sep 10 00:48:34.840518 env[1317]: time="2025-09-10T00:48:34.840488379Z" level=info msg="TearDown network for sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" successfully"
Sep 10 00:48:34.840518 env[1317]: time="2025-09-10T00:48:34.840514047Z" level=info msg="StopPodSandbox for \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" returns successfully"
Sep 10 00:48:34.969955 kubelet[2069]: I0910 00:48:34.968976 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-config-path\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.969955 kubelet[2069]: I0910 00:48:34.969031 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-bpf-maps\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.969955 kubelet[2069]: I0910 00:48:34.969060 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dd5k\" (UniqueName: \"kubernetes.io/projected/3d1368e8-7260-453c-a28a-fb897824542d-kube-api-access-9dd5k\") pod \"3d1368e8-7260-453c-a28a-fb897824542d\" (UID: \"3d1368e8-7260-453c-a28a-fb897824542d\") "
Sep 10 00:48:34.969955 kubelet[2069]: I0910 00:48:34.969079 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-lib-modules\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.969955 kubelet[2069]: I0910 00:48:34.969101 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5b75624-00a0-4562-8f5e-1120484bbc42-clustermesh-secrets\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.969955 kubelet[2069]: I0910 00:48:34.969120 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d1368e8-7260-453c-a28a-fb897824542d-cilium-config-path\") pod \"3d1368e8-7260-453c-a28a-fb897824542d\" (UID: \"3d1368e8-7260-453c-a28a-fb897824542d\") "
Sep 10 00:48:34.970611 kubelet[2069]: I0910 00:48:34.969139 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-kernel\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.970611 kubelet[2069]: I0910 00:48:34.969157 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-net\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.970611 kubelet[2069]: I0910 00:48:34.969174 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-xtables-lock\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.970611 kubelet[2069]: I0910 00:48:34.969205 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-cgroup\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.970611 kubelet[2069]: I0910 00:48:34.969175 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.970611 kubelet[2069]: I0910 00:48:34.969228 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-hubble-tls\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.970828 kubelet[2069]: I0910 00:48:34.969248 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-run\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.970828 kubelet[2069]: I0910 00:48:34.969266 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-hostproc\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.970828 kubelet[2069]: I0910 00:48:34.969265 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.970828 kubelet[2069]: I0910 00:48:34.969307 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.970828 kubelet[2069]: I0910 00:48:34.969322 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.971039 kubelet[2069]: I0910 00:48:34.969338 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.971039 kubelet[2069]: I0910 00:48:34.969351 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.971039 kubelet[2069]: I0910 00:48:34.969562 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.971039 kubelet[2069]: I0910 00:48:34.970152 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.971039 kubelet[2069]: I0910 00:48:34.970184 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-hostproc" (OuterVolumeSpecName: "hostproc") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.971474 kubelet[2069]: I0910 00:48:34.969284 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-etc-cni-netd\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.971616 kubelet[2069]: I0910 00:48:34.971592 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cni-path\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.971755 kubelet[2069]: I0910 00:48:34.971732 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hszql\" (UniqueName: \"kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-kube-api-access-hszql\") pod \"f5b75624-00a0-4562-8f5e-1120484bbc42\" (UID: \"f5b75624-00a0-4562-8f5e-1120484bbc42\") "
Sep 10 00:48:34.971910 kubelet[2069]: I0910 00:48:34.971873 2069 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972021 kubelet[2069]: I0910 00:48:34.972001 2069 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972128 kubelet[2069]: I0910 00:48:34.972108 2069 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972258 kubelet[2069]: I0910 00:48:34.972238 2069 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972364 kubelet[2069]: I0910 00:48:34.972345 2069 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972469 kubelet[2069]: I0910 00:48:34.972450 2069 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972574 kubelet[2069]: I0910 00:48:34.972555 2069 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972685 kubelet[2069]: I0910 00:48:34.972665 2069 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972787 kubelet[2069]: I0910 00:48:34.972768 2069 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:34.972993 kubelet[2069]: I0910 00:48:34.972969 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cni-path" (OuterVolumeSpecName: "cni-path") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:34.973091 kubelet[2069]: I0910 00:48:34.973054 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d1368e8-7260-453c-a28a-fb897824542d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d1368e8-7260-453c-a28a-fb897824542d" (UID: "3d1368e8-7260-453c-a28a-fb897824542d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 10 00:48:34.973175 kubelet[2069]: I0910 00:48:34.973157 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b75624-00a0-4562-8f5e-1120484bbc42-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 10 00:48:34.974017 kubelet[2069]: I0910 00:48:34.973993 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 10 00:48:34.974357 kubelet[2069]: I0910 00:48:34.974331 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d1368e8-7260-453c-a28a-fb897824542d-kube-api-access-9dd5k" (OuterVolumeSpecName: "kube-api-access-9dd5k") pod "3d1368e8-7260-453c-a28a-fb897824542d" (UID: "3d1368e8-7260-453c-a28a-fb897824542d"). InnerVolumeSpecName "kube-api-access-9dd5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 00:48:34.975443 kubelet[2069]: I0910 00:48:34.975415 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 00:48:34.975533 kubelet[2069]: I0910 00:48:34.975445 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-kube-api-access-hszql" (OuterVolumeSpecName: "kube-api-access-hszql") pod "f5b75624-00a0-4562-8f5e-1120484bbc42" (UID: "f5b75624-00a0-4562-8f5e-1120484bbc42"). InnerVolumeSpecName "kube-api-access-hszql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 10 00:48:35.073425 kubelet[2069]: I0910 00:48:35.073367 2069 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dd5k\" (UniqueName: \"kubernetes.io/projected/3d1368e8-7260-453c-a28a-fb897824542d-kube-api-access-9dd5k\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:35.073425 kubelet[2069]: I0910 00:48:35.073410 2069 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5b75624-00a0-4562-8f5e-1120484bbc42-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:35.073425 kubelet[2069]: I0910 00:48:35.073422 2069 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d1368e8-7260-453c-a28a-fb897824542d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:35.073425 kubelet[2069]: I0910 00:48:35.073432 2069 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:35.073425 kubelet[2069]: I0910 00:48:35.073443 2069 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5b75624-00a0-4562-8f5e-1120484bbc42-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:35.073810 kubelet[2069]: I0910 00:48:35.073453 2069 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hszql\" (UniqueName: \"kubernetes.io/projected/f5b75624-00a0-4562-8f5e-1120484bbc42-kube-api-access-hszql\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:35.073810 kubelet[2069]: I0910 00:48:35.073463 2069 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5b75624-00a0-4562-8f5e-1120484bbc42-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:35.705792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038-rootfs.mount: Deactivated successfully.
Sep 10 00:48:35.706011 systemd[1]: var-lib-kubelet-pods-3d1368e8\x2d7260\x2d453c\x2da28a\x2dfb897824542d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9dd5k.mount: Deactivated successfully.
Sep 10 00:48:35.706150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205-rootfs.mount: Deactivated successfully.
Sep 10 00:48:35.706323 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205-shm.mount: Deactivated successfully.
Sep 10 00:48:35.706473 systemd[1]: var-lib-kubelet-pods-f5b75624\x2d00a0\x2d4562\x2d8f5e\x2d1120484bbc42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhszql.mount: Deactivated successfully.
Sep 10 00:48:35.706630 systemd[1]: var-lib-kubelet-pods-f5b75624\x2d00a0\x2d4562\x2d8f5e\x2d1120484bbc42-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 10 00:48:35.706773 systemd[1]: var-lib-kubelet-pods-f5b75624\x2d00a0\x2d4562\x2d8f5e\x2d1120484bbc42-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 10 00:48:35.801549 kubelet[2069]: I0910 00:48:35.801506 2069 scope.go:117] "RemoveContainer" containerID="413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336"
Sep 10 00:48:35.803536 env[1317]: time="2025-09-10T00:48:35.803161079Z" level=info msg="RemoveContainer for \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\""
Sep 10 00:48:35.807603 env[1317]: time="2025-09-10T00:48:35.807547830Z" level=info msg="RemoveContainer for \"413cdfbfe2bb7b9b0c19f0397fdb46bcdc16d62700503c2fac1e3428b10d3336\" returns successfully"
Sep 10 00:48:35.807874 kubelet[2069]: I0910 00:48:35.807813 2069 scope.go:117] "RemoveContainer" containerID="d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038"
Sep 10 00:48:35.809147 env[1317]: time="2025-09-10T00:48:35.809097045Z" level=info msg="RemoveContainer for \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\""
Sep 10 00:48:35.815216 env[1317]: time="2025-09-10T00:48:35.813611817Z" level=info msg="RemoveContainer for \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\" returns successfully"
Sep 10 00:48:35.815216 env[1317]: time="2025-09-10T00:48:35.814998233Z" level=info msg="RemoveContainer for \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\""
Sep 10 00:48:35.815427 kubelet[2069]: I0910 00:48:35.813842 2069 scope.go:117] "RemoveContainer" containerID="bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec"
Sep 10 00:48:35.819085 env[1317]: time="2025-09-10T00:48:35.819038879Z" level=info msg="RemoveContainer for \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\" returns successfully"
Sep 10 00:48:35.819531 kubelet[2069]: I0910 00:48:35.819491 2069 scope.go:117] "RemoveContainer" containerID="2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c"
Sep 10 00:48:35.820789 env[1317]: time="2025-09-10T00:48:35.820721136Z" level=info msg="RemoveContainer for \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\""
Sep 10 00:48:35.824208 env[1317]: time="2025-09-10T00:48:35.824157324Z" level=info msg="RemoveContainer for \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\" returns successfully"
Sep 10 00:48:35.824385 kubelet[2069]: I0910 00:48:35.824360 2069 scope.go:117] "RemoveContainer" containerID="4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111"
Sep 10 00:48:35.825619 env[1317]: time="2025-09-10T00:48:35.825550984Z" level=info msg="RemoveContainer for \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\""
Sep 10 00:48:35.829124 env[1317]: time="2025-09-10T00:48:35.829082104Z" level=info msg="RemoveContainer for \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\" returns successfully"
Sep 10 00:48:35.829370 kubelet[2069]: I0910 00:48:35.829272 2069 scope.go:117] "RemoveContainer" containerID="618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1"
Sep 10 00:48:35.830610 env[1317]: time="2025-09-10T00:48:35.830210302Z" level=info msg="RemoveContainer for \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\""
Sep 10 00:48:35.833755 env[1317]: time="2025-09-10T00:48:35.833688471Z" level=info msg="RemoveContainer for \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\" returns successfully"
Sep 10 00:48:35.834045 kubelet[2069]: I0910 00:48:35.834004 2069 scope.go:117] "RemoveContainer" containerID="d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038"
Sep 10 00:48:35.834406 env[1317]: time="2025-09-10T00:48:35.834336007Z" level=error msg="ContainerStatus for \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\": not found"
Sep 10 00:48:35.834542 kubelet[2069]: E0910 00:48:35.834518 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\": not found" containerID="d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038"
Sep 10 00:48:35.834637 kubelet[2069]: I0910 00:48:35.834558 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038"} err="failed to get container status \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\": rpc error: code = NotFound desc = an error occurred when try to find container \"d883a04f5c8282917dac4155ee43c2593f26a85653f473913f429f7a30e7e038\": not found"
Sep 10 00:48:35.834668 kubelet[2069]: I0910 00:48:35.834638 2069 scope.go:117] "RemoveContainer" containerID="bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec"
Sep 10 00:48:35.834842 env[1317]: time="2025-09-10T00:48:35.834792151Z" level=error msg="ContainerStatus for \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\": not found"
Sep 10 00:48:35.835132 kubelet[2069]: E0910 00:48:35.835108 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\": not found" containerID="bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec"
Sep 10 00:48:35.835211 kubelet[2069]: I0910 00:48:35.835130 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec"} err="failed to get container status \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc328b8474bc0f81712d68adac8a85e4b0fd4826e07a2b9a16ca373367fe91ec\": not found"
Sep 10 00:48:35.835211 kubelet[2069]: I0910 00:48:35.835149 2069 scope.go:117] "RemoveContainer" containerID="2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c"
Sep 10 00:48:35.835336 env[1317]: time="2025-09-10T00:48:35.835300214Z" level=error msg="ContainerStatus for \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\": not found"
Sep 10 00:48:35.835467 kubelet[2069]: E0910 00:48:35.835442 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\": not found" containerID="2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c"
Sep 10 00:48:35.835514 kubelet[2069]: I0910 00:48:35.835475 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c"} err="failed to get container status \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2eb89844b05c7af631fd596f5ca1eb8e978ac729a9f2e706958f986f37f46e3c\": not found"
Sep 10 00:48:35.835514 kubelet[2069]: I0910 00:48:35.835498 2069 scope.go:117] "RemoveContainer" containerID="4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111"
Sep 10 00:48:35.835737 env[1317]: time="2025-09-10T00:48:35.835669373Z" level=error msg="ContainerStatus for \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\": not found"
Sep 10 00:48:35.835846 kubelet[2069]: E0910 00:48:35.835828 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\": not found" containerID="4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111"
Sep 10 00:48:35.835924 kubelet[2069]: I0910 00:48:35.835846 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111"} err="failed to get container status \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c2965b65a23372e1954564a7bc5c0d7c2ca8ec95644db5629adca687d9dd111\": not found"
Sep 10 00:48:35.835924 kubelet[2069]: I0910 00:48:35.835859 2069 scope.go:117] "RemoveContainer" containerID="618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1"
Sep 10 00:48:35.836035 env[1317]: time="2025-09-10T00:48:35.835999368Z" level=error msg="ContainerStatus for \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\": not found"
Sep 10 00:48:35.836158 kubelet[2069]: E0910 00:48:35.836138 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\": not found" containerID="618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1"
Sep 10 00:48:35.836274 kubelet[2069]: I0910 00:48:35.836166 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1"} err="failed to get container status \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"618ccc0121adffa38c8559bdbbb9104055beccc730eba5764f00a65f537601a1\": not found"
Sep 10 00:48:36.658194 sshd[3733]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:36.661543 systemd[1]: Started sshd@25-10.0.0.93:22-10.0.0.1:38858.service.
Sep 10 00:48:36.662299 systemd[1]: sshd@24-10.0.0.93:22-10.0.0.1:38850.service: Deactivated successfully.
Sep 10 00:48:36.664444 systemd-logind[1293]: Session 25 logged out. Waiting for processes to exit.
Sep 10 00:48:36.664467 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 00:48:36.665701 systemd-logind[1293]: Removed session 25.
Sep 10 00:48:36.704687 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 38858 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:36.706619 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:36.711278 systemd-logind[1293]: New session 26 of user core.
Sep 10 00:48:36.712033 systemd[1]: Started session-26.scope.
Sep 10 00:48:37.393362 sshd[3899]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:37.395875 systemd[1]: Started sshd@26-10.0.0.93:22-10.0.0.1:38864.service.
Sep 10 00:48:37.411230 systemd[1]: sshd@25-10.0.0.93:22-10.0.0.1:38858.service: Deactivated successfully.
Sep 10 00:48:37.414027 systemd-logind[1293]: Session 26 logged out. Waiting for processes to exit.
Sep 10 00:48:37.414829 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 00:48:37.417739 systemd-logind[1293]: Removed session 26.
Sep 10 00:48:37.421084 kubelet[2069]: E0910 00:48:37.420985 2069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5b75624-00a0-4562-8f5e-1120484bbc42" containerName="apply-sysctl-overwrites"
Sep 10 00:48:37.421084 kubelet[2069]: E0910 00:48:37.421071 2069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5b75624-00a0-4562-8f5e-1120484bbc42" containerName="mount-bpf-fs"
Sep 10 00:48:37.421575 kubelet[2069]: E0910 00:48:37.421126 2069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5b75624-00a0-4562-8f5e-1120484bbc42" containerName="mount-cgroup"
Sep 10 00:48:37.421575 kubelet[2069]: E0910 00:48:37.421142 2069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d1368e8-7260-453c-a28a-fb897824542d" containerName="cilium-operator"
Sep 10 00:48:37.421575 kubelet[2069]: E0910 00:48:37.421154 2069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5b75624-00a0-4562-8f5e-1120484bbc42" containerName="clean-cilium-state"
Sep 10 00:48:37.421575 kubelet[2069]: E0910 00:48:37.421163 2069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5b75624-00a0-4562-8f5e-1120484bbc42" containerName="cilium-agent"
Sep 10 00:48:37.421575 kubelet[2069]: I0910 00:48:37.421246 2069 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b75624-00a0-4562-8f5e-1120484bbc42" containerName="cilium-agent"
Sep 10 00:48:37.421575 kubelet[2069]: I0910 00:48:37.421260 2069 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d1368e8-7260-453c-a28a-fb897824542d" containerName="cilium-operator"
Sep 10 00:48:37.458624 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 38864 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:37.459946 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:37.463839 systemd-logind[1293]: New session 27 of user core.
Sep 10 00:48:37.465384 systemd[1]: Started session-27.scope.
Sep 10 00:48:37.492307 kubelet[2069]: I0910 00:48:37.492239 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-kernel\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492307 kubelet[2069]: I0910 00:48:37.492292 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cni-path\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492307 kubelet[2069]: I0910 00:48:37.492318 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-ipsec-secrets\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492566 kubelet[2069]: I0910 00:48:37.492332 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hubble-tls\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492566 kubelet[2069]: I0910 00:48:37.492349 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-bpf-maps\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492566 kubelet[2069]: I0910 00:48:37.492364 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-lib-modules\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492566 kubelet[2069]: I0910 00:48:37.492381 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-clustermesh-secrets\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492566 kubelet[2069]: I0910 00:48:37.492400 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-net\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492566 kubelet[2069]: I0910 00:48:37.492422 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-run\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492702 kubelet[2069]: I0910 00:48:37.492480 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-config-path\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492702 kubelet[2069]: I0910 00:48:37.492522 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-xtables-lock\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492702 kubelet[2069]: I0910 00:48:37.492541 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-cgroup\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492702 kubelet[2069]: I0910 00:48:37.492560 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbj4q\" (UniqueName: \"kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-kube-api-access-bbj4q\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492702 kubelet[2069]: I0910 00:48:37.492576 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hostproc\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.492702 kubelet[2069]: I0910 00:48:37.492594 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-etc-cni-netd\") pod \"cilium-v59p5\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") " pod="kube-system/cilium-v59p5"
Sep 10 00:48:37.521121 kubelet[2069]: I0910 00:48:37.521045 2069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d1368e8-7260-453c-a28a-fb897824542d" path="/var/lib/kubelet/pods/3d1368e8-7260-453c-a28a-fb897824542d/volumes"
Sep 10 00:48:37.521725 kubelet[2069]: I0910 00:48:37.521652 2069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b75624-00a0-4562-8f5e-1120484bbc42" path="/var/lib/kubelet/pods/f5b75624-00a0-4562-8f5e-1120484bbc42/volumes"
Sep 10 00:48:37.628330 sshd[3914]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:37.630829 systemd[1]: Started sshd@27-10.0.0.93:22-10.0.0.1:38868.service.
Sep 10 00:48:37.636664 systemd[1]: sshd@26-10.0.0.93:22-10.0.0.1:38864.service: Deactivated successfully.
Sep 10 00:48:37.637973 systemd-logind[1293]: Session 27 logged out. Waiting for processes to exit.
Sep 10 00:48:37.638015 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 00:48:37.642832 systemd-logind[1293]: Removed session 27.
Sep 10 00:48:37.644301 kubelet[2069]: E0910 00:48:37.644256 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:37.645854 env[1317]: time="2025-09-10T00:48:37.645377227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v59p5,Uid:b9a35441-4fbe-4ba3-ad1e-21166c9e891f,Namespace:kube-system,Attempt:0,}"
Sep 10 00:48:37.670295 env[1317]: time="2025-09-10T00:48:37.670207606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:48:37.670539 env[1317]: time="2025-09-10T00:48:37.670255897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:48:37.670539 env[1317]: time="2025-09-10T00:48:37.670267539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:48:37.672319 env[1317]: time="2025-09-10T00:48:37.670726248Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89 pid=3945 runtime=io.containerd.runc.v2
Sep 10 00:48:37.683163 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 38868 ssh2: RSA SHA256:zOOwxZ2DaFRBK4LsECHIj7aq8TnAuoQ7zSNOZYk1iz8
Sep 10 00:48:37.685626 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:48:37.693251 systemd[1]: Started session-28.scope.
Sep 10 00:48:37.693977 systemd-logind[1293]: New session 28 of user core.
Sep 10 00:48:37.711989 env[1317]: time="2025-09-10T00:48:37.711925193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v59p5,Uid:b9a35441-4fbe-4ba3-ad1e-21166c9e891f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\""
Sep 10 00:48:37.713116 kubelet[2069]: E0910 00:48:37.713086 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:37.715719 env[1317]: time="2025-09-10T00:48:37.715662489Z" level=info msg="CreateContainer within sandbox \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 10 00:48:37.730275 env[1317]: time="2025-09-10T00:48:37.730177355Z" level=info msg="CreateContainer within sandbox \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\""
Sep 10 00:48:37.731143 env[1317]: time="2025-09-10T00:48:37.731103088Z" level=info msg="StartContainer for \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\""
Sep 10 00:48:37.791407 env[1317]: time="2025-09-10T00:48:37.791323622Z" level=info msg="StartContainer for \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\" returns successfully"
Sep 10 00:48:37.813198 env[1317]: time="2025-09-10T00:48:37.813137318Z" level=info msg="StopContainer for \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\" with timeout 2 (s)"
Sep 10 00:48:37.814799 env[1317]: time="2025-09-10T00:48:37.814765051Z" level=info msg="Stop container \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\" with signal terminated"
Sep 10 00:48:37.845318 env[1317]: time="2025-09-10T00:48:37.845258393Z" level=info msg="shim disconnected" id=5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620
Sep 10 00:48:37.845318 env[1317]: time="2025-09-10T00:48:37.845305592Z" level=warning msg="cleaning up after shim disconnected" id=5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620 namespace=k8s.io
Sep 10 00:48:37.845318 env[1317]: time="2025-09-10T00:48:37.845314930Z" level=info msg="cleaning up dead shim"
Sep 10 00:48:37.852200 env[1317]: time="2025-09-10T00:48:37.852116510Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n"
Sep 10 00:48:37.855357 env[1317]: time="2025-09-10T00:48:37.855302963Z" level=info msg="StopContainer for \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\" returns successfully"
Sep 10 00:48:37.855939 env[1317]: time="2025-09-10T00:48:37.855911355Z" level=info msg="StopPodSandbox for \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\""
Sep 10 00:48:37.856026 env[1317]: time="2025-09-10T00:48:37.855969646Z" level=info msg="Container to stop \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 00:48:37.884874 env[1317]: time="2025-09-10T00:48:37.884803706Z" level=info msg="shim disconnected" id=2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89
Sep 10 00:48:37.884874 env[1317]: time="2025-09-10T00:48:37.884869731Z" level=warning msg="cleaning up after shim disconnected" id=2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89 namespace=k8s.io
Sep 10 00:48:37.884874 env[1317]: time="2025-09-10T00:48:37.884882384Z" level=info msg="cleaning up dead shim"
Sep 10 00:48:37.893416 env[1317]: time="2025-09-10T00:48:37.893341052Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4074 runtime=io.containerd.runc.v2\n"
Sep 10 00:48:37.893773 env[1317]: time="2025-09-10T00:48:37.893741480Z" level=info msg="TearDown network for sandbox \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" successfully"
Sep 10 00:48:37.893839 env[1317]: time="2025-09-10T00:48:37.893770585Z" level=info msg="StopPodSandbox for \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" returns successfully"
Sep 10 00:48:37.997131 kubelet[2069]: I0910 00:48:37.996372 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-kernel\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997131 kubelet[2069]: I0910 00:48:37.996449 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-ipsec-secrets\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997131 kubelet[2069]: I0910 00:48:37.996475 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-lib-modules\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997131 kubelet[2069]: I0910 00:48:37.996501 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-clustermesh-secrets\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997131 kubelet[2069]: I0910 00:48:37.996525 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-bpf-maps\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997131 kubelet[2069]: I0910 00:48:37.996545 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-xtables-lock\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997533 kubelet[2069]: I0910 00:48:37.996536 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.997533 kubelet[2069]: I0910 00:48:37.996567 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hostproc\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997533 kubelet[2069]: I0910 00:48:37.996605 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hostproc" (OuterVolumeSpecName: "hostproc") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.997533 kubelet[2069]: I0910 00:48:37.996652 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hubble-tls\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997533 kubelet[2069]: I0910 00:48:37.996681 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-net\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997708 kubelet[2069]: I0910 00:48:37.996950 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.997708 kubelet[2069]: I0910 00:48:37.997026 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-config-path\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997708 kubelet[2069]: I0910 00:48:37.997057 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbj4q\" (UniqueName: \"kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-kube-api-access-bbj4q\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997708 kubelet[2069]: I0910 00:48:37.997236 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-etc-cni-netd\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997708 kubelet[2069]: I0910 00:48:37.997258 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cni-path\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997708 kubelet[2069]: I0910 00:48:37.997277 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-run\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997967 kubelet[2069]: I0910 00:48:37.997331 2069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-cgroup\") pod \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\" (UID: \"b9a35441-4fbe-4ba3-ad1e-21166c9e891f\") "
Sep 10 00:48:37.997967 kubelet[2069]: I0910 00:48:37.997381 2069 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:37.997967 kubelet[2069]: I0910 00:48:37.997413 2069 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:37.997967 kubelet[2069]: I0910 00:48:37.997441 2069 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 10 00:48:37.997967 kubelet[2069]: I0910 00:48:37.997471 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.997967 kubelet[2069]: I0910 00:48:37.997492 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.999032 kubelet[2069]: I0910 00:48:37.998989 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.999113 kubelet[2069]: I0910 00:48:37.999038 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.999217 kubelet[2069]: I0910 00:48:37.999180 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 10 00:48:37.999358 kubelet[2069]: I0910 00:48:37.999335 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cni-path" (OuterVolumeSpecName: "cni-path") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:48:37.999489 kubelet[2069]: I0910 00:48:37.999466 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:48:38.000306 kubelet[2069]: I0910 00:48:38.000279 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:48:38.000498 kubelet[2069]: I0910 00:48:38.000464 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:48:38.001227 kubelet[2069]: I0910 00:48:38.001187 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:48:38.002321 kubelet[2069]: I0910 00:48:38.002294 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:48:38.002842 kubelet[2069]: I0910 00:48:38.002794 2069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-kube-api-access-bbj4q" (OuterVolumeSpecName: "kube-api-access-bbj4q") pod "b9a35441-4fbe-4ba3-ad1e-21166c9e891f" (UID: "b9a35441-4fbe-4ba3-ad1e-21166c9e891f"). InnerVolumeSpecName "kube-api-access-bbj4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:48:38.098183 kubelet[2069]: I0910 00:48:38.098127 2069 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098183 kubelet[2069]: I0910 00:48:38.098178 2069 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098183 kubelet[2069]: I0910 00:48:38.098193 2069 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098205 2069 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-cgroup\") on node 
\"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098217 2069 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098234 2069 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098245 2069 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098255 2069 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098266 2069 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbj4q\" (UniqueName: \"kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-kube-api-access-bbj4q\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098279 2069 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098458 kubelet[2069]: I0910 00:48:38.098289 2069 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.098811 kubelet[2069]: I0910 00:48:38.098299 
2069 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9a35441-4fbe-4ba3-ad1e-21166c9e891f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:48:38.583946 kubelet[2069]: E0910 00:48:38.583875 2069 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 00:48:38.599693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89-rootfs.mount: Deactivated successfully. Sep 10 00:48:38.599859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89-shm.mount: Deactivated successfully. Sep 10 00:48:38.599978 systemd[1]: var-lib-kubelet-pods-b9a35441\x2d4fbe\x2d4ba3\x2dad1e\x2d21166c9e891f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbbj4q.mount: Deactivated successfully. Sep 10 00:48:38.600072 systemd[1]: var-lib-kubelet-pods-b9a35441\x2d4fbe\x2d4ba3\x2dad1e\x2d21166c9e891f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:48:38.600156 systemd[1]: var-lib-kubelet-pods-b9a35441\x2d4fbe\x2d4ba3\x2dad1e\x2d21166c9e891f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:48:38.600252 systemd[1]: var-lib-kubelet-pods-b9a35441\x2d4fbe\x2d4ba3\x2dad1e\x2d21166c9e891f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 10 00:48:38.816295 kubelet[2069]: I0910 00:48:38.816241 2069 scope.go:117] "RemoveContainer" containerID="5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620" Sep 10 00:48:38.818041 env[1317]: time="2025-09-10T00:48:38.817604645Z" level=info msg="RemoveContainer for \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\"" Sep 10 00:48:38.822225 env[1317]: time="2025-09-10T00:48:38.822128861Z" level=info msg="RemoveContainer for \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\" returns successfully" Sep 10 00:48:38.823048 env[1317]: time="2025-09-10T00:48:38.822655288Z" level=error msg="ContainerStatus for \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\": not found" Sep 10 00:48:38.823128 kubelet[2069]: I0910 00:48:38.822391 2069 scope.go:117] "RemoveContainer" containerID="5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620" Sep 10 00:48:38.823128 kubelet[2069]: E0910 00:48:38.822868 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\": not found" containerID="5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620" Sep 10 00:48:38.823128 kubelet[2069]: I0910 00:48:38.822911 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620"} err="failed to get container status \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\": rpc error: code = NotFound desc = an error occurred when try to find container \"5549acc5ddd6588126c118030c10f67afedcf53a6b6a65691754b31789e03620\": not found" Sep 10 00:48:38.866071 kubelet[2069]: E0910 
00:48:38.865917 2069 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9a35441-4fbe-4ba3-ad1e-21166c9e891f" containerName="mount-cgroup" Sep 10 00:48:38.866338 kubelet[2069]: I0910 00:48:38.866317 2069 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9a35441-4fbe-4ba3-ad1e-21166c9e891f" containerName="mount-cgroup" Sep 10 00:48:38.902852 kubelet[2069]: I0910 00:48:38.902734 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-cilium-run\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.902852 kubelet[2069]: I0910 00:48:38.902850 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1454389d-1339-4406-9504-711a46b5c72e-clustermesh-secrets\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903219 kubelet[2069]: I0910 00:48:38.902879 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-cni-path\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903219 kubelet[2069]: I0910 00:48:38.902917 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctbj\" (UniqueName: \"kubernetes.io/projected/1454389d-1339-4406-9504-711a46b5c72e-kube-api-access-pctbj\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903219 kubelet[2069]: I0910 00:48:38.902941 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-host-proc-sys-net\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903219 kubelet[2069]: I0910 00:48:38.902960 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-lib-modules\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903219 kubelet[2069]: I0910 00:48:38.902979 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1454389d-1339-4406-9504-711a46b5c72e-hubble-tls\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903219 kubelet[2069]: I0910 00:48:38.902999 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-xtables-lock\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903432 kubelet[2069]: I0910 00:48:38.903016 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-host-proc-sys-kernel\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903432 kubelet[2069]: I0910 00:48:38.903037 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-hostproc\") pod 
\"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903432 kubelet[2069]: I0910 00:48:38.903055 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-cilium-cgroup\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903432 kubelet[2069]: I0910 00:48:38.903075 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-etc-cni-netd\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903432 kubelet[2069]: I0910 00:48:38.903094 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1454389d-1339-4406-9504-711a46b5c72e-cilium-ipsec-secrets\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903432 kubelet[2069]: I0910 00:48:38.903112 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1454389d-1339-4406-9504-711a46b5c72e-bpf-maps\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:38.903658 kubelet[2069]: I0910 00:48:38.903133 2069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1454389d-1339-4406-9504-711a46b5c72e-cilium-config-path\") pod \"cilium-6fx2d\" (UID: \"1454389d-1339-4406-9504-711a46b5c72e\") " pod="kube-system/cilium-6fx2d" Sep 10 00:48:39.188685 
kubelet[2069]: E0910 00:48:39.188523 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:39.189449 env[1317]: time="2025-09-10T00:48:39.189175576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6fx2d,Uid:1454389d-1339-4406-9504-711a46b5c72e,Namespace:kube-system,Attempt:0,}" Sep 10 00:48:39.203738 env[1317]: time="2025-09-10T00:48:39.203635828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:48:39.203738 env[1317]: time="2025-09-10T00:48:39.203683097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:48:39.203738 env[1317]: time="2025-09-10T00:48:39.203700209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:48:39.203978 env[1317]: time="2025-09-10T00:48:39.203848240Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af pid=4101 runtime=io.containerd.runc.v2 Sep 10 00:48:39.233950 env[1317]: time="2025-09-10T00:48:39.233871099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6fx2d,Uid:1454389d-1339-4406-9504-711a46b5c72e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\"" Sep 10 00:48:39.234992 kubelet[2069]: E0910 00:48:39.234964 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:39.237400 env[1317]: time="2025-09-10T00:48:39.237344023Z" level=info msg="CreateContainer within sandbox 
\"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:48:39.256337 env[1317]: time="2025-09-10T00:48:39.256150661Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee4cb09dc8e678cb73c6f15e0672804af2619f56bdaa7e12a9e726c126915f78\"" Sep 10 00:48:39.256967 env[1317]: time="2025-09-10T00:48:39.256921601Z" level=info msg="StartContainer for \"ee4cb09dc8e678cb73c6f15e0672804af2619f56bdaa7e12a9e726c126915f78\"" Sep 10 00:48:39.306316 env[1317]: time="2025-09-10T00:48:39.306242479Z" level=info msg="StartContainer for \"ee4cb09dc8e678cb73c6f15e0672804af2619f56bdaa7e12a9e726c126915f78\" returns successfully" Sep 10 00:48:39.349698 env[1317]: time="2025-09-10T00:48:39.349042844Z" level=info msg="shim disconnected" id=ee4cb09dc8e678cb73c6f15e0672804af2619f56bdaa7e12a9e726c126915f78 Sep 10 00:48:39.349698 env[1317]: time="2025-09-10T00:48:39.349114740Z" level=warning msg="cleaning up after shim disconnected" id=ee4cb09dc8e678cb73c6f15e0672804af2619f56bdaa7e12a9e726c126915f78 namespace=k8s.io Sep 10 00:48:39.349698 env[1317]: time="2025-09-10T00:48:39.349128226Z" level=info msg="cleaning up dead shim" Sep 10 00:48:39.359654 env[1317]: time="2025-09-10T00:48:39.359583216Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" Sep 10 00:48:39.521965 kubelet[2069]: I0910 00:48:39.521766 2069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9a35441-4fbe-4ba3-ad1e-21166c9e891f" path="/var/lib/kubelet/pods/b9a35441-4fbe-4ba3-ad1e-21166c9e891f/volumes" Sep 10 00:48:39.820057 kubelet[2069]: E0910 00:48:39.819835 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:39.821477 env[1317]: time="2025-09-10T00:48:39.821436854Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:48:39.834721 env[1317]: time="2025-09-10T00:48:39.834664161Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e0c70ac44dcb51907c4b15191967c364d37ebdaac7c9e1ff80253974944ef0d8\"" Sep 10 00:48:39.837593 env[1317]: time="2025-09-10T00:48:39.837547398Z" level=info msg="StartContainer for \"e0c70ac44dcb51907c4b15191967c364d37ebdaac7c9e1ff80253974944ef0d8\"" Sep 10 00:48:39.891081 env[1317]: time="2025-09-10T00:48:39.891023442Z" level=info msg="StartContainer for \"e0c70ac44dcb51907c4b15191967c364d37ebdaac7c9e1ff80253974944ef0d8\" returns successfully" Sep 10 00:48:39.917437 env[1317]: time="2025-09-10T00:48:39.917381363Z" level=info msg="shim disconnected" id=e0c70ac44dcb51907c4b15191967c364d37ebdaac7c9e1ff80253974944ef0d8 Sep 10 00:48:39.917437 env[1317]: time="2025-09-10T00:48:39.917430837Z" level=warning msg="cleaning up after shim disconnected" id=e0c70ac44dcb51907c4b15191967c364d37ebdaac7c9e1ff80253974944ef0d8 namespace=k8s.io Sep 10 00:48:39.917437 env[1317]: time="2025-09-10T00:48:39.917439744Z" level=info msg="cleaning up dead shim" Sep 10 00:48:39.924773 env[1317]: time="2025-09-10T00:48:39.924714104Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4245 runtime=io.containerd.runc.v2\n" Sep 10 00:48:40.600312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0c70ac44dcb51907c4b15191967c364d37ebdaac7c9e1ff80253974944ef0d8-rootfs.mount: Deactivated successfully. 
Sep 10 00:48:40.826201 kubelet[2069]: E0910 00:48:40.826161 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:40.828082 env[1317]: time="2025-09-10T00:48:40.828029100Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:48:40.843775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508897533.mount: Deactivated successfully. Sep 10 00:48:40.846782 env[1317]: time="2025-09-10T00:48:40.846614342Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e847f6de4f764fc40c4cd67c98b6cf00825cffa8f8d6f8b25603f35b1dcdf5ac\"" Sep 10 00:48:40.847302 env[1317]: time="2025-09-10T00:48:40.847265135Z" level=info msg="StartContainer for \"e847f6de4f764fc40c4cd67c98b6cf00825cffa8f8d6f8b25603f35b1dcdf5ac\"" Sep 10 00:48:40.909838 env[1317]: time="2025-09-10T00:48:40.909416516Z" level=info msg="StartContainer for \"e847f6de4f764fc40c4cd67c98b6cf00825cffa8f8d6f8b25603f35b1dcdf5ac\" returns successfully" Sep 10 00:48:40.981548 env[1317]: time="2025-09-10T00:48:40.981475439Z" level=info msg="shim disconnected" id=e847f6de4f764fc40c4cd67c98b6cf00825cffa8f8d6f8b25603f35b1dcdf5ac Sep 10 00:48:40.981548 env[1317]: time="2025-09-10T00:48:40.981540152Z" level=warning msg="cleaning up after shim disconnected" id=e847f6de4f764fc40c4cd67c98b6cf00825cffa8f8d6f8b25603f35b1dcdf5ac namespace=k8s.io Sep 10 00:48:40.981548 env[1317]: time="2025-09-10T00:48:40.981550041Z" level=info msg="cleaning up dead shim" Sep 10 00:48:40.990258 env[1317]: time="2025-09-10T00:48:40.990165656Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:40Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=4300 runtime=io.containerd.runc.v2\n" Sep 10 00:48:41.600157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e847f6de4f764fc40c4cd67c98b6cf00825cffa8f8d6f8b25603f35b1dcdf5ac-rootfs.mount: Deactivated successfully. Sep 10 00:48:41.831184 kubelet[2069]: E0910 00:48:41.831114 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:41.833424 env[1317]: time="2025-09-10T00:48:41.833337771Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:48:41.856205 env[1317]: time="2025-09-10T00:48:41.855855998Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6e7c6299ff9d73b6a07579ad058512c2c86e3add7c274792663bd96416bbe02b\"" Sep 10 00:48:41.857183 env[1317]: time="2025-09-10T00:48:41.857121313Z" level=info msg="StartContainer for \"6e7c6299ff9d73b6a07579ad058512c2c86e3add7c274792663bd96416bbe02b\"" Sep 10 00:48:41.857524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189110035.mount: Deactivated successfully. 
Sep 10 00:48:41.912684 env[1317]: time="2025-09-10T00:48:41.912625797Z" level=info msg="StartContainer for \"6e7c6299ff9d73b6a07579ad058512c2c86e3add7c274792663bd96416bbe02b\" returns successfully" Sep 10 00:48:41.934333 env[1317]: time="2025-09-10T00:48:41.934278147Z" level=info msg="shim disconnected" id=6e7c6299ff9d73b6a07579ad058512c2c86e3add7c274792663bd96416bbe02b Sep 10 00:48:41.934333 env[1317]: time="2025-09-10T00:48:41.934327461Z" level=warning msg="cleaning up after shim disconnected" id=6e7c6299ff9d73b6a07579ad058512c2c86e3add7c274792663bd96416bbe02b namespace=k8s.io Sep 10 00:48:41.934333 env[1317]: time="2025-09-10T00:48:41.934336428Z" level=info msg="cleaning up dead shim" Sep 10 00:48:41.942207 env[1317]: time="2025-09-10T00:48:41.942107091Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:48:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4355 runtime=io.containerd.runc.v2\n" Sep 10 00:48:42.600500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e7c6299ff9d73b6a07579ad058512c2c86e3add7c274792663bd96416bbe02b-rootfs.mount: Deactivated successfully. Sep 10 00:48:42.835269 kubelet[2069]: E0910 00:48:42.835230 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:48:42.837604 env[1317]: time="2025-09-10T00:48:42.837540885Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:48:42.849986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285052000.mount: Deactivated successfully. 
Sep 10 00:48:42.960008 env[1317]: time="2025-09-10T00:48:42.959790562Z" level=info msg="CreateContainer within sandbox \"ed9a6657af4d88a8f6f7c21f0ebaa84619c7dc84796d8e64836bf8fe2b7665af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0cd1bdea2b1d90a3903669d122356fb0178628ec6085efe3691e56efce31036\""
Sep 10 00:48:42.961117 env[1317]: time="2025-09-10T00:48:42.960774574Z" level=info msg="StartContainer for \"c0cd1bdea2b1d90a3903669d122356fb0178628ec6085efe3691e56efce31036\""
Sep 10 00:48:43.066944 env[1317]: time="2025-09-10T00:48:43.066847637Z" level=info msg="StartContainer for \"c0cd1bdea2b1d90a3903669d122356fb0178628ec6085efe3691e56efce31036\" returns successfully"
Sep 10 00:48:43.366930 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 10 00:48:43.840704 kubelet[2069]: E0910 00:48:43.840653    2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:45.190312 kubelet[2069]: E0910 00:48:45.190261    2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:46.320814 systemd-networkd[1076]: lxc_health: Link UP
Sep 10 00:48:46.334739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 10 00:48:46.331686 systemd-networkd[1076]: lxc_health: Gained carrier
Sep 10 00:48:47.190920 kubelet[2069]: E0910 00:48:47.190855    2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:47.210259 kubelet[2069]: I0910 00:48:47.210182    2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6fx2d" podStartSLOduration=9.210156286 podStartE2EDuration="9.210156286s" podCreationTimestamp="2025-09-10 00:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:48:43.858789809 +0000 UTC m=+110.540385006" watchObservedRunningTime="2025-09-10 00:48:47.210156286 +0000 UTC m=+113.891751483"
Sep 10 00:48:47.518947 kubelet[2069]: E0910 00:48:47.518755    2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:47.834166 systemd-networkd[1076]: lxc_health: Gained IPv6LL
Sep 10 00:48:47.849410 kubelet[2069]: E0910 00:48:47.849378    2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:48.851007 kubelet[2069]: E0910 00:48:48.850958    2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:48:52.374979 systemd[1]: run-containerd-runc-k8s.io-c0cd1bdea2b1d90a3903669d122356fb0178628ec6085efe3691e56efce31036-runc.kCc8jZ.mount: Deactivated successfully.
Sep 10 00:48:52.435825 sshd[3933]: pam_unix(sshd:session): session closed for user core
Sep 10 00:48:52.439278 systemd[1]: sshd@27-10.0.0.93:22-10.0.0.1:38868.service: Deactivated successfully.
Sep 10 00:48:52.440653 systemd[1]: session-28.scope: Deactivated successfully.
Sep 10 00:48:52.440674 systemd-logind[1293]: Session 28 logged out. Waiting for processes to exit.
Sep 10 00:48:52.442121 systemd-logind[1293]: Removed session 28.
Sep 10 00:48:53.512861 env[1317]: time="2025-09-10T00:48:53.512805868Z" level=info msg="StopPodSandbox for \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\""
Sep 10 00:48:53.513388 env[1317]: time="2025-09-10T00:48:53.512907800Z" level=info msg="TearDown network for sandbox \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" successfully"
Sep 10 00:48:53.513388 env[1317]: time="2025-09-10T00:48:53.512941454Z" level=info msg="StopPodSandbox for \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" returns successfully"
Sep 10 00:48:53.513388 env[1317]: time="2025-09-10T00:48:53.513302395Z" level=info msg="RemovePodSandbox for \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\""
Sep 10 00:48:53.513388 env[1317]: time="2025-09-10T00:48:53.513330689Z" level=info msg="Forcibly stopping sandbox \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\""
Sep 10 00:48:53.513531 env[1317]: time="2025-09-10T00:48:53.513399499Z" level=info msg="TearDown network for sandbox \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" successfully"
Sep 10 00:48:53.557365 env[1317]: time="2025-09-10T00:48:53.557294750Z" level=info msg="RemovePodSandbox \"ff33ffbadb2a18183c7fc381972cfe30d9afc2547baec649f2b467a40bd5debd\" returns successfully"
Sep 10 00:48:53.557975 env[1317]: time="2025-09-10T00:48:53.557945911Z" level=info msg="StopPodSandbox for \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\""
Sep 10 00:48:53.558116 env[1317]: time="2025-09-10T00:48:53.558065536Z" level=info msg="TearDown network for sandbox \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" successfully"
Sep 10 00:48:53.558170 env[1317]: time="2025-09-10T00:48:53.558114208Z" level=info msg="StopPodSandbox for \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" returns successfully"
Sep 10 00:48:53.558409 env[1317]: time="2025-09-10T00:48:53.558384068Z" level=info msg="RemovePodSandbox for \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\""
Sep 10 00:48:53.558487 env[1317]: time="2025-09-10T00:48:53.558414405Z" level=info msg="Forcibly stopping sandbox \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\""
Sep 10 00:48:53.558535 env[1317]: time="2025-09-10T00:48:53.558491090Z" level=info msg="TearDown network for sandbox \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" successfully"
Sep 10 00:48:53.562372 env[1317]: time="2025-09-10T00:48:53.562339809Z" level=info msg="RemovePodSandbox \"2365b82fc9f8f43c4cf0ee464894d4e060c8f73d84040721b35e23d100b37d89\" returns successfully"
Sep 10 00:48:53.562668 env[1317]: time="2025-09-10T00:48:53.562636570Z" level=info msg="StopPodSandbox for \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\""
Sep 10 00:48:53.562776 env[1317]: time="2025-09-10T00:48:53.562722693Z" level=info msg="TearDown network for sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" successfully"
Sep 10 00:48:53.562776 env[1317]: time="2025-09-10T00:48:53.562767677Z" level=info msg="StopPodSandbox for \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" returns successfully"
Sep 10 00:48:53.563045 env[1317]: time="2025-09-10T00:48:53.563012941Z" level=info msg="RemovePodSandbox for \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\""
Sep 10 00:48:53.563101 env[1317]: time="2025-09-10T00:48:53.563049550Z" level=info msg="Forcibly stopping sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\""
Sep 10 00:48:53.563141 env[1317]: time="2025-09-10T00:48:53.563116657Z" level=info msg="TearDown network for sandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" successfully"
Sep 10 00:48:53.566590 env[1317]: time="2025-09-10T00:48:53.566552657Z" level=info msg="RemovePodSandbox \"38b58acc6a72d8ecebd6c8bd02db288b80d54aac8574a74378934c9ad4fdc205\" returns successfully"