Jul 14 22:38:37.924666 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 14 20:42:36 -00 2025 Jul 14 22:38:37.924710 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba Jul 14 22:38:37.924722 kernel: BIOS-provided physical RAM map: Jul 14 22:38:37.924730 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 14 22:38:37.924749 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 14 22:38:37.924758 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 14 22:38:37.924768 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 14 22:38:37.924776 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 14 22:38:37.924786 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 14 22:38:37.924794 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 14 22:38:37.924816 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 14 22:38:37.924824 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 14 22:38:37.924831 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 14 22:38:37.924838 kernel: NX (Execute Disable) protection: active Jul 14 22:38:37.924851 kernel: SMBIOS 2.8 present. Jul 14 22:38:37.924859 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 14 22:38:37.924894 kernel: Hypervisor detected: KVM Jul 14 22:38:37.924903 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 14 22:38:37.924911 kernel: kvm-clock: cpu 0, msr 6019b001, primary cpu clock Jul 14 22:38:37.924919 kernel: kvm-clock: using sched offset of 2801122706 cycles Jul 14 22:38:37.924928 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 14 22:38:37.924945 kernel: tsc: Detected 2794.750 MHz processor Jul 14 22:38:37.924968 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 14 22:38:37.924980 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 14 22:38:37.924988 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 14 22:38:37.924997 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 14 22:38:37.925019 kernel: Using GB pages for direct mapping Jul 14 22:38:37.925029 kernel: ACPI: Early table checksum verification disabled Jul 14 22:38:37.925037 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 14 22:38:37.925045 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:38:37.925054 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:38:37.925062 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:38:37.925073 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 14 22:38:37.925082 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:38:37.925090 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:38:37.925098 kernel: ACPI: MCFG 
0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:38:37.925106 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:38:37.925114 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 14 22:38:37.925136 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 14 22:38:37.925144 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 14 22:38:37.925157 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 14 22:38:37.925166 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 14 22:38:37.925175 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 14 22:38:37.925184 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 14 22:38:37.925193 kernel: No NUMA configuration found Jul 14 22:38:37.925202 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 14 22:38:37.925214 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jul 14 22:38:37.925223 kernel: Zone ranges: Jul 14 22:38:37.925248 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 14 22:38:37.925258 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 14 22:38:37.925267 kernel: Normal empty Jul 14 22:38:37.925276 kernel: Movable zone start for each node Jul 14 22:38:37.925284 kernel: Early memory node ranges Jul 14 22:38:37.925293 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 14 22:38:37.925301 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jul 14 22:38:37.925312 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jul 14 22:38:37.925326 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 14 22:38:37.925348 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 14 22:38:37.925358 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 14 22:38:37.925367 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 14 22:38:37.925376 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 14 22:38:37.925385 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 14 22:38:37.925394 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 14 22:38:37.925406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 14 22:38:37.925415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 14 22:38:37.925429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 14 22:38:37.925438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 14 22:38:37.925446 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 14 22:38:37.925471 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 14 22:38:37.925479 kernel: TSC deadline timer available Jul 14 22:38:37.925488 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 14 22:38:37.925497 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 14 22:38:37.925506 kernel: kvm-guest: setup PV sched yield Jul 14 22:38:37.925515 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 14 22:38:37.925527 kernel: Booting paravirtualized kernel on KVM Jul 14 22:38:37.925550 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 14 22:38:37.925561 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 14 22:38:37.925570 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Jul 14 22:38:37.925579 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 14 22:38:37.925587 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 14 22:38:37.925596 kernel: kvm-guest: setup async PF for cpu 0 Jul 14 22:38:37.925604 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Jul 14 22:38:37.925613 kernel: kvm-guest: PV spinlocks enabled Jul 14 22:38:37.925637 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 14 22:38:37.925645 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Jul 14 22:38:37.925654 kernel: Policy zone: DMA32 Jul 14 22:38:37.925664 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba Jul 14 22:38:37.925674 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 14 22:38:37.925684 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 14 22:38:37.925703 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 22:38:37.925718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 22:38:37.925731 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 134796K reserved, 0K cma-reserved) Jul 14 22:38:37.925740 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 14 22:38:37.925764 kernel: ftrace: allocating 34607 entries in 136 pages Jul 14 22:38:37.925773 kernel: ftrace: allocated 136 pages with 2 groups Jul 14 22:38:37.925782 kernel: rcu: Hierarchical RCU implementation. Jul 14 22:38:37.925791 kernel: rcu: RCU event tracing is enabled. Jul 14 22:38:37.925800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 14 22:38:37.925824 kernel: Rude variant of Tasks RCU enabled. Jul 14 22:38:37.925833 kernel: Tracing variant of Tasks RCU enabled. Jul 14 22:38:37.925845 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 14 22:38:37.925854 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 14 22:38:37.925890 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 14 22:38:37.925900 kernel: random: crng init done Jul 14 22:38:37.925909 kernel: Console: colour VGA+ 80x25 Jul 14 22:38:37.925934 kernel: printk: console [ttyS0] enabled Jul 14 22:38:37.925966 kernel: ACPI: Core revision 20210730 Jul 14 22:38:37.925977 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 14 22:38:37.925986 kernel: APIC: Switch to symmetric I/O mode setup Jul 14 22:38:37.925998 kernel: x2apic enabled Jul 14 22:38:37.926022 kernel: Switched APIC routing to physical x2apic. Jul 14 22:38:37.926031 kernel: kvm-guest: setup PV IPIs Jul 14 22:38:37.926040 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 14 22:38:37.926049 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 14 22:38:37.926076 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jul 14 22:38:37.926085 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 14 22:38:37.926093 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 14 22:38:37.926109 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 14 22:38:37.926131 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 14 22:38:37.926141 kernel: Spectre V2 : Mitigation: Retpolines Jul 14 22:38:37.926151 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 14 22:38:37.926162 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 14 22:38:37.926186 kernel: RETBleed: Mitigation: untrained return thunk Jul 14 22:38:37.926197 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 14 22:38:37.926207 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 14 22:38:37.926216 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 14 22:38:37.926226 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 14 22:38:37.926253 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 14 22:38:37.926262 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 14 22:38:37.926272 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 14 22:38:37.926281 kernel: Freeing SMP alternatives memory: 32K Jul 14 22:38:37.926303 kernel: pid_max: default: 32768 minimum: 301 Jul 14 22:38:37.926314 kernel: LSM: Security Framework initializing Jul 14 22:38:37.926323 kernel: SELinux: Initializing. Jul 14 22:38:37.926336 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:38:37.926346 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:38:37.926368 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 14 22:38:37.926379 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 14 22:38:37.926388 kernel: ... version: 0 Jul 14 22:38:37.926397 kernel: ... bit width: 48 Jul 14 22:38:37.926406 kernel: ... generic registers: 6 Jul 14 22:38:37.926428 kernel: ... value mask: 0000ffffffffffff Jul 14 22:38:37.926439 kernel: ... max period: 00007fffffffffff Jul 14 22:38:37.926452 kernel: ... fixed-purpose events: 0 Jul 14 22:38:37.926461 kernel: ... event mask: 000000000000003f Jul 14 22:38:37.926484 kernel: signal: max sigframe size: 1776 Jul 14 22:38:37.926495 kernel: rcu: Hierarchical SRCU implementation. Jul 14 22:38:37.926504 kernel: smp: Bringing up secondary CPUs ... Jul 14 22:38:37.926514 kernel: x86: Booting SMP configuration: Jul 14 22:38:37.926537 kernel: .... 
node #0, CPUs: #1 Jul 14 22:38:37.926547 kernel: kvm-clock: cpu 1, msr 6019b041, secondary cpu clock Jul 14 22:38:37.926555 kernel: kvm-guest: setup async PF for cpu 1 Jul 14 22:38:37.926564 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Jul 14 22:38:37.926584 kernel: #2 Jul 14 22:38:37.926597 kernel: kvm-clock: cpu 2, msr 6019b081, secondary cpu clock Jul 14 22:38:37.926606 kernel: kvm-guest: setup async PF for cpu 2 Jul 14 22:38:37.926616 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Jul 14 22:38:37.926626 kernel: #3 Jul 14 22:38:37.926635 kernel: kvm-clock: cpu 3, msr 6019b0c1, secondary cpu clock Jul 14 22:38:37.926644 kernel: kvm-guest: setup async PF for cpu 3 Jul 14 22:38:37.926654 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Jul 14 22:38:37.926664 kernel: smp: Brought up 1 node, 4 CPUs Jul 14 22:38:37.926677 kernel: smpboot: Max logical packages: 1 Jul 14 22:38:37.926686 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 14 22:38:37.926696 kernel: devtmpfs: initialized Jul 14 22:38:37.926705 kernel: x86/mm: Memory block size: 128MB Jul 14 22:38:37.926714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 22:38:37.926724 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 14 22:38:37.926733 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 22:38:37.926741 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 22:38:37.926751 kernel: audit: initializing netlink subsys (disabled) Jul 14 22:38:37.926763 kernel: audit: type=2000 audit(1752532717.309:1): state=initialized audit_enabled=0 res=1 Jul 14 22:38:37.926773 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 22:38:37.926782 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 14 22:38:37.926792 kernel: cpuidle: using governor menu Jul 14 22:38:37.926801 kernel: ACPI: bus type PCI registered Jul 14 22:38:37.926810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 22:38:37.926819 kernel: dca service started, version 1.12.1 Jul 14 22:38:37.926829 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 14 22:38:37.926838 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Jul 14 22:38:37.926849 kernel: PCI: Using configuration type 1 for base access Jul 14 22:38:37.926859 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
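The SMP summary above ("22358.00 BogoMIPS") is just the per-CPU calibration result ("5589.50 BogoMIPS (lpj=2794750)") times four CPUs. A minimal sketch of that arithmetic, assuming HZ=1000 for this kernel build (an assumption, but it matches the printed figures):

```python
# BogoMIPS arithmetic reconstructed from this boot log (a sketch, not kernel code).
HZ = 1000      # assumed timer frequency for this build
lpj = 2794750  # loops-per-jiffy from "Calibrating delay loop ... (lpj=2794750)"

# The kernel reports BogoMIPS as lpj / (500000 / HZ), printed with two decimals.
bogomips = lpj / (500000 / HZ)
print(f"per CPU: {bogomips:.2f} BogoMIPS")      # 5589.50
print(f"4 CPUs:  {4 * bogomips:.2f} BogoMIPS")  # 22358.00, as logged above
```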
Jul 14 22:38:37.926894 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 22:38:37.926922 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 22:38:37.926933 kernel: ACPI: Added _OSI(Module Device) Jul 14 22:38:37.926942 kernel: ACPI: Added _OSI(Processor Device) Jul 14 22:38:37.926962 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 22:38:37.926971 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 14 22:38:37.926981 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 14 22:38:37.926994 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 14 22:38:37.927003 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 22:38:37.927012 kernel: ACPI: Interpreter enabled Jul 14 22:38:37.927020 kernel: ACPI: PM: (supports S0 S3 S5) Jul 14 22:38:37.927029 kernel: ACPI: Using IOAPIC for interrupt routing Jul 14 22:38:37.927038 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 14 22:38:37.927047 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 14 22:38:37.927055 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 14 22:38:37.927228 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 22:38:37.927344 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 14 22:38:37.927449 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 14 22:38:37.927465 kernel: PCI host bridge to bus 0000:00 Jul 14 22:38:37.927584 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 14 22:38:37.927692 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 14 22:38:37.927791 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 14 22:38:37.927906 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 14 22:38:37.928013 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 14 22:38:37.928105 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 14 22:38:37.928199 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 14 22:38:37.928337 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 14 22:38:37.928463 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 14 22:38:37.928569 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 14 22:38:37.928682 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 14 22:38:37.928790 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 14 22:38:37.928941 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 14 22:38:37.929087 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 14 22:38:37.929198 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 14 22:38:37.929311 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 14 22:38:37.929419 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 14 22:38:37.929557 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 14 22:38:37.929669 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jul 14 22:38:37.929780 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 14 22:38:37.929941 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 14 22:38:37.930076 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 14 22:38:37.930190 
kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jul 14 22:38:37.930301 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 14 22:38:37.930407 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 14 22:38:37.930516 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 14 22:38:37.930634 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 14 22:38:37.930744 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 14 22:38:37.930881 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 14 22:38:37.931006 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jul 14 22:38:37.931120 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jul 14 22:38:37.931245 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 14 22:38:37.931352 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 14 22:38:37.931368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 14 22:38:37.931377 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 14 22:38:37.931387 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 22:38:37.931396 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 14 22:38:37.931407 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 14 22:38:37.931423 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 14 22:38:37.931435 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 14 22:38:37.931444 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 14 22:38:37.931453 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 14 22:38:37.931463 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 14 22:38:37.931472 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 14 22:38:37.931482 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 14 22:38:37.931491 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 14 22:38:37.931500 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 14 22:38:37.931513 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 14 22:38:37.931522 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 14 22:38:37.931531 kernel: iommu: Default domain type: Translated Jul 14 22:38:37.931540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 22:38:37.931651 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 14 22:38:37.931757 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 22:38:37.931889 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 14 22:38:37.931906 kernel: vgaarb: loaded Jul 14 22:38:37.931920 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 14 22:38:37.931929 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 14 22:38:37.931939 kernel: PTP clock support registered Jul 14 22:38:37.931959 kernel: PCI: Using ACPI for IRQ routing Jul 14 22:38:37.931969 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 22:38:37.931978 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 14 22:38:37.931988 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 14 22:38:37.931997 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 14 22:38:37.932006 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 14 22:38:37.932018 kernel: clocksource: Switched to clocksource kvm-clock Jul 14 22:38:37.932027 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 22:38:37.932036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 22:38:37.932045 kernel: pnp: PnP ACPI init Jul 14 22:38:37.932178 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 14 22:38:37.932195 kernel: pnp: PnP ACPI: found 6 devices Jul 14 22:38:37.932205 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 22:38:37.932214 kernel: NET: Registered PF_INET protocol family Jul 14 22:38:37.932223 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 14 22:38:37.932237 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 14 22:38:37.932246 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 22:38:37.932256 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 22:38:37.932265 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 14 22:38:37.932275 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 14 22:38:37.932285 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:38:37.932293 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:38:37.932303 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 22:38:37.932315 kernel: NET: Registered PF_XDP protocol family Jul 14 22:38:37.932413 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 14 22:38:37.932508 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 14 22:38:37.932597 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 14 22:38:37.932691 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 14 22:38:37.932795 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 14 22:38:37.932988 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 14 22:38:37.933006 kernel: PCI: CLS 0 bytes, default 64 Jul 14 22:38:37.933015 kernel: Initialise system trusted keyrings Jul 14 22:38:37.933028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 14 22:38:37.933037 kernel: Key type asymmetric registered Jul 14 22:38:37.933045 kernel: Asymmetric key parser 'x509' registered Jul 14 22:38:37.933054 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 14 22:38:37.933063 kernel: io scheduler mq-deadline registered Jul 14 22:38:37.933072 kernel: io scheduler kyber registered Jul 14 22:38:37.933081 kernel: io scheduler bfq registered Jul 14 22:38:37.933090 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 14 22:38:37.933101 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 14 22:38:37.933114 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 14 
22:38:37.933124 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 14 22:38:37.933133 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 22:38:37.933144 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 22:38:37.933154 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 14 22:38:37.933163 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 22:38:37.933173 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 22:38:37.933292 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 14 22:38:37.933313 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 22:38:37.933406 kernel: rtc_cmos 00:04: registered as rtc0 Jul 14 22:38:37.933503 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:38:37 UTC (1752532717) Jul 14 22:38:37.933595 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 14 22:38:37.933610 kernel: NET: Registered PF_INET6 protocol family Jul 14 22:38:37.933620 kernel: Segment Routing with IPv6 Jul 14 22:38:37.933630 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 22:38:37.933638 kernel: NET: Registered PF_PACKET protocol family Jul 14 22:38:37.933648 kernel: Key type dns_resolver registered Jul 14 22:38:37.933661 kernel: IPI shorthand broadcast: enabled Jul 14 22:38:37.933671 kernel: sched_clock: Marking stable (455001476, 159655386)->(732305672, -117648810) Jul 14 22:38:37.933681 kernel: registered taskstats version 1 Jul 14 22:38:37.933691 kernel: Loading compiled-in X.509 certificates Jul 14 22:38:37.933700 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 14a6940dcbc00bab0c83ae71c4abeb315720716d' Jul 14 22:38:37.933710 kernel: Key type .fscrypt registered Jul 14 22:38:37.933719 kernel: Key type fscrypt-provisioning registered Jul 14 22:38:37.933728 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 14 22:38:37.933742 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:38:37.933752 kernel: ima: No architecture policies found Jul 14 22:38:37.933761 kernel: clk: Disabling unused clocks Jul 14 22:38:37.933771 kernel: Freeing unused kernel image (initmem) memory: 47476K Jul 14 22:38:37.933780 kernel: Write protecting the kernel read-only data: 28672k Jul 14 22:38:37.933790 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 14 22:38:37.933799 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Jul 14 22:38:37.933808 kernel: Run /init as init process Jul 14 22:38:37.933817 kernel: with arguments: Jul 14 22:38:37.933829 kernel: /init Jul 14 22:38:37.933838 kernel: with environment: Jul 14 22:38:37.933849 kernel: HOME=/ Jul 14 22:38:37.933858 kernel: TERM=linux Jul 14 22:38:37.933884 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:38:37.933897 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 22:38:37.933910 systemd[1]: Detected virtualization kvm. Jul 14 22:38:37.933920 systemd[1]: Detected architecture x86-64. Jul 14 22:38:37.933932 systemd[1]: Running in initrd. Jul 14 22:38:37.933941 systemd[1]: No hostname configured, using default hostname. Jul 14 22:38:37.933960 systemd[1]: Hostname set to <localhost>.
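Everything on the kernel command line that the kernel itself does not consume is handed to user space, which is what the "Unknown kernel command line parameters" notice earlier refers to and why /init starts with BOOT_IMAGE= in its environment, as logged above. A minimal sketch of splitting such a command line into flags and key=value pairs (real parsing also honors quoting, which this ignores; for repeated keys such as rootflags=rw the last occurrence wins here):

```python
# Split a kernel command line like the one logged above into a dict.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # bare tokens become boolean flags
    return params

with open("/proc/cmdline") as f:  # on the running system
    params = parse_cmdline(f.read())

print(params.get("root"))                # LABEL=ROOT on this boot
print(params.get("flatcar.first_boot"))  # "detected"
```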
Jul 14 22:38:37.933971 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:38:37.933980 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:38:37.933991 systemd[1]: Started systemd-ask-password-console.path. Jul 14 22:38:37.934001 systemd[1]: Reached target cryptsetup.target. Jul 14 22:38:37.934010 systemd[1]: Reached target paths.target. Jul 14 22:38:37.934020 systemd[1]: Reached target slices.target. Jul 14 22:38:37.934033 systemd[1]: Reached target swap.target. Jul 14 22:38:37.934051 systemd[1]: Reached target timers.target. Jul 14 22:38:37.934064 systemd[1]: Listening on iscsid.socket. Jul 14 22:38:37.934073 systemd[1]: Listening on iscsiuio.socket. Jul 14 22:38:37.934083 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 22:38:37.934096 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 22:38:37.934106 systemd[1]: Listening on systemd-journald.socket. Jul 14 22:38:37.934117 systemd[1]: Listening on systemd-networkd.socket. Jul 14 22:38:37.934127 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 22:38:37.934138 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 22:38:37.934148 systemd[1]: Reached target sockets.target. Jul 14 22:38:37.934157 systemd[1]: Starting kmod-static-nodes.service... Jul 14 22:38:37.934167 systemd[1]: Finished network-cleanup.service. Jul 14 22:38:37.934177 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:38:37.934191 systemd[1]: Starting systemd-journald.service... Jul 14 22:38:37.934203 systemd[1]: Starting systemd-modules-load.service... Jul 14 22:38:37.934214 systemd[1]: Starting systemd-resolved.service... Jul 14 22:38:37.934224 systemd[1]: Starting systemd-vconsole-setup.service... Jul 14 22:38:37.934235 systemd[1]: Finished kmod-static-nodes.service. Jul 14 22:38:37.934248 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:38:37.934258 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 22:38:37.934271 systemd-journald[198]: Journal started Jul 14 22:38:37.934324 systemd-journald[198]: Runtime Journal (/run/log/journal/3865efbcc0c74b59a29b3f117d0244bd) is 6.0M, max 48.5M, 42.5M free. Jul 14 22:38:37.915933 systemd-modules-load[199]: Inserted module 'overlay' Jul 14 22:38:37.987703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 22:38:37.987731 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:38:37.987744 kernel: Bridge firewalling registered Jul 14 22:38:37.987754 systemd[1]: Started systemd-journald.service. Jul 14 22:38:37.987764 kernel: audit: type=1130 audit(1752532717.979:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:37.987774 kernel: SCSI subsystem initialized Jul 14 22:38:37.987782 kernel: audit: type=1130 audit(1752532717.987:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:37.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:38:37.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:37.934178 systemd-resolved[200]: Positive Trust Anchors: Jul 14 22:38:38.010271 kernel: audit: type=1130 audit(1752532717.990:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:37.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:37.934186 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:38:38.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:37.934212 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 22:38:38.022618 kernel: audit: type=1130 audit(1752532718.009:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:37.936335 systemd-resolved[200]: Defaulting to hostname 'linux'. Jul 14 22:38:37.952890 systemd-modules-load[199]: Inserted module 'br_netfilter' Jul 14 22:38:37.987796 systemd[1]: Started systemd-resolved.service. Jul 14 22:38:37.991359 systemd[1]: Finished systemd-vconsole-setup.service. Jul 14 22:38:38.010802 systemd[1]: Reached target nss-lookup.target. Jul 14 22:38:38.015082 systemd[1]: Starting dracut-cmdline-ask.service... Jul 14 22:38:38.035550 systemd[1]: Finished dracut-cmdline-ask.service. Jul 14 22:38:38.040165 kernel: audit: type=1130 audit(1752532718.035:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.037083 systemd[1]: Starting dracut-cmdline.service... Jul 14 22:38:38.045769 dracut-cmdline[215]: dracut-dracut-053 Jul 14 22:38:38.048115 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
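Two messages above are worth connecting: systemd-modules-load inserted br_netfilter, and the kernel then printed "Bridge firewalling registered", because bridged traffic is no longer passed to iptables unless that module is loaded (the "filtering via arp/ip/ip6tables is no longer available by default" notice). A minimal sketch of performing the same check and load by hand (assumes root; the module and sysctl names are the standard ones):

```python
import pathlib
import subprocess

# Load the module the bridge notice refers to (a no-op if already loaded).
subprocess.run(["modprobe", "br_netfilter"], check=True)

# Once loaded, the module exposes its sysctl knobs; "1" means bridged
# traffic is handed to iptables again.
knob = pathlib.Path("/proc/sys/net/bridge/bridge-nf-call-iptables")
print(knob.read_text().strip())
```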
Jul 14 22:38:38.048132 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:38:38.048141 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 14 22:38:38.048154 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba Jul 14 22:38:38.055638 systemd-modules-load[199]: Inserted module 'dm_multipath' Jul 14 22:38:38.057321 systemd[1]: Finished systemd-modules-load.service. Jul 14 22:38:38.062572 kernel: audit: type=1130 audit(1752532718.057:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.058891 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:38:38.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.067889 systemd[1]: Finished systemd-sysctl.service. Jul 14 22:38:38.072890 kernel: audit: type=1130 audit(1752532718.067:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.099900 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:38:38.129920 kernel: iscsi: registered transport (tcp) Jul 14 22:38:38.150920 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:38:38.151015 kernel: QLogic iSCSI HBA Driver Jul 14 22:38:38.181351 systemd[1]: Finished dracut-cmdline.service. Jul 14 22:38:38.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.215253 systemd[1]: Starting dracut-pre-udev.service... Jul 14 22:38:38.218652 kernel: audit: type=1130 audit(1752532718.214:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.260891 kernel: raid6: avx2x4 gen() 28666 MB/s Jul 14 22:38:38.288890 kernel: raid6: avx2x4 xor() 6666 MB/s Jul 14 22:38:38.305892 kernel: raid6: avx2x2 gen() 29825 MB/s Jul 14 22:38:38.322879 kernel: raid6: avx2x2 xor() 18963 MB/s Jul 14 22:38:38.339880 kernel: raid6: avx2x1 gen() 26225 MB/s Jul 14 22:38:38.356879 kernel: raid6: avx2x1 xor() 15314 MB/s Jul 14 22:38:38.373896 kernel: raid6: sse2x4 gen() 14702 MB/s Jul 14 22:38:38.390887 kernel: raid6: sse2x4 xor() 7434 MB/s Jul 14 22:38:38.407890 kernel: raid6: sse2x2 gen() 15906 MB/s Jul 14 22:38:38.424892 kernel: raid6: sse2x2 xor() 9627 MB/s Jul 14 22:38:38.441897 kernel: raid6: sse2x1 gen() 11721 MB/s Jul 14 22:38:38.459245 kernel: raid6: sse2x1 xor() 7517 MB/s Jul 14 22:38:38.459291 kernel: raid6: using algorithm avx2x2 gen() 29825 MB/s Jul 14 22:38:38.459304 kernel: raid6: .... 
xor() 18963 MB/s, rmw enabled Jul 14 22:38:38.459929 kernel: raid6: using avx2x2 recovery algorithm Jul 14 22:38:38.472889 kernel: xor: automatically using best checksumming function avx Jul 14 22:38:38.561890 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 14 22:38:38.570286 systemd[1]: Finished dracut-pre-udev.service. Jul 14 22:38:38.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.573000 audit: BPF prog-id=7 op=LOAD Jul 14 22:38:38.574890 kernel: audit: type=1130 audit(1752532718.570:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.574000 audit: BPF prog-id=8 op=LOAD Jul 14 22:38:38.575273 systemd[1]: Starting systemd-udevd.service... Jul 14 22:38:38.587285 systemd-udevd[399]: Using default interface naming scheme 'v252'. Jul 14 22:38:38.591071 systemd[1]: Started systemd-udevd.service. Jul 14 22:38:38.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.593674 systemd[1]: Starting dracut-pre-trigger.service... Jul 14 22:38:38.604787 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jul 14 22:38:38.629619 systemd[1]: Finished dracut-pre-trigger.service. Jul 14 22:38:38.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.630842 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 22:38:38.668175 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 22:38:38.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:38.702899 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 22:38:38.730406 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:38:38.730430 kernel: AVX2 version of gcm_enc/dec engaged. Jul 14 22:38:38.730448 kernel: AES CTR mode by8 optimization enabled Jul 14 22:38:38.730459 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 22:38:38.730470 kernel: GPT:9289727 != 19775487 Jul 14 22:38:38.730481 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 22:38:38.730493 kernel: GPT:9289727 != 19775487 Jul 14 22:38:38.730504 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 22:38:38.730515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:38:38.737888 kernel: libata version 3.00 loaded. 
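The raid6 block above is the boot-time benchmark the kernel runs to pick the fastest gen()/xor() pair, avx2x2 on this EPYC guest. A minimal sketch that recovers the winner from dmesg after boot (assumes dmesg is readable by the caller; the line format is the kernel's, as shown above):

```python
import re
import subprocess

# Collect the "raid6: <algo> gen() <N> MB/s" benchmark lines from dmesg.
dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
gen = {m[1]: int(m[2])
       for m in re.finditer(r"raid6: (\S+) gen\(\) (\d+) MB/s", dmesg)}

best = max(gen, key=gen.get)
print(best, gen[best], "MB/s")  # avx2x2 29825 MB/s in this boot
```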
Jul 14 22:38:38.745889 kernel: ahci 0000:00:1f.2: version 3.0 Jul 14 22:38:38.755470 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 14 22:38:38.755493 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 14 22:38:38.755630 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 14 22:38:38.755727 kernel: scsi host0: ahci Jul 14 22:38:38.755842 kernel: scsi host1: ahci Jul 14 22:38:38.755989 kernel: scsi host2: ahci Jul 14 22:38:38.756074 kernel: scsi host3: ahci Jul 14 22:38:38.756167 kernel: scsi host4: ahci Jul 14 22:38:38.756268 kernel: scsi host5: ahci Jul 14 22:38:38.756347 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 14 22:38:38.756357 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 14 22:38:38.756366 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 14 22:38:38.756374 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 14 22:38:38.756383 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 14 22:38:38.756394 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 14 22:38:38.754877 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 14 22:38:38.795160 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (438) Jul 14 22:38:38.796217 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 14 22:38:38.797420 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 14 22:38:38.805718 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 14 22:38:38.813053 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 22:38:38.814987 systemd[1]: Starting disk-uuid.service... Jul 14 22:38:38.832006 disk-uuid[519]: Primary Header is updated. Jul 14 22:38:38.832006 disk-uuid[519]: Secondary Entries is updated. Jul 14 22:38:38.832006 disk-uuid[519]: Secondary Header is updated. Jul 14 22:38:38.836301 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:38:38.839898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:38:38.842886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:38:39.065630 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 14 22:38:39.065715 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 14 22:38:39.065726 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 14 22:38:39.065735 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 14 22:38:39.065743 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 14 22:38:39.066901 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 14 22:38:39.068031 kernel: ata3.00: applying bridge limits Jul 14 22:38:39.068893 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 14 22:38:39.069896 kernel: ata3.00: configured for UDMA/100 Jul 14 22:38:39.069938 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 14 22:38:39.101930 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 14 22:38:39.118432 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 22:38:39.118445 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 22:38:39.842926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:38:39.842989 disk-uuid[520]: The operation has completed successfully. Jul 14 22:38:39.869380 systemd[1]: disk-uuid.service: Deactivated successfully. 
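The GPT complaints above ("GPT:9289727 != 19775487") are routine on first boot: the backup GPT header must live in the disk's last LBA, but the image was built for a smaller disk and grown afterwards, so the primary header still points at the old location; disk-uuid then rewrites it ("Primary Header is updated" / "Secondary Header is updated"). The arithmetic, with the numbers from this log:

```python
# Backup-GPT-header location math for the values logged above.
SECTOR = 512
blocks = 19775488          # "virtio1: [vda] 19775488 512-byte logical blocks"

expected_alt = blocks - 1  # the backup header belongs in the last LBA
stale_alt = 9289727        # what the primary header still recorded

print(expected_alt)          # 19775487, the value in the kernel warning
grown = (expected_alt - stale_alt) * SECTOR
print(grown / 2**30, "GiB")  # 5.0: the image was grown by 5 GiB
```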
Jul 14 22:38:39.869489 systemd[1]: Finished disk-uuid.service. Jul 14 22:38:39.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:39.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:39.875871 systemd[1]: Starting verity-setup.service... Jul 14 22:38:39.889108 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 14 22:38:39.910541 systemd[1]: Found device dev-mapper-usr.device. Jul 14 22:38:39.912433 systemd[1]: Mounting sysusr-usr.mount... Jul 14 22:38:39.914039 systemd[1]: Finished verity-setup.service. Jul 14 22:38:39.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:39.999550 systemd[1]: Mounted sysusr-usr.mount. Jul 14 22:38:40.001097 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 14 22:38:40.000294 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 14 22:38:40.001102 systemd[1]: Starting ignition-setup.service... Jul 14 22:38:40.001974 systemd[1]: Starting parse-ip-for-networkd.service... Jul 14 22:38:40.014429 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:38:40.014475 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:38:40.014491 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:38:40.022565 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 22:38:40.032884 systemd[1]: Finished ignition-setup.service. Jul 14 22:38:40.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.035455 systemd[1]: Starting ignition-fetch-offline.service... Jul 14 22:38:40.072666 systemd[1]: Finished parse-ip-for-networkd.service. Jul 14 22:38:40.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.075000 audit: BPF prog-id=9 op=LOAD Jul 14 22:38:40.076492 systemd[1]: Starting systemd-networkd.service... Jul 14 22:38:40.102848 systemd-networkd[715]: lo: Link UP Jul 14 22:38:40.102856 systemd-networkd[715]: lo: Gained carrier Jul 14 22:38:40.103262 systemd-networkd[715]: Enumeration completed Jul 14 22:38:40.103349 systemd[1]: Started systemd-networkd.service. Jul 14 22:38:40.103568 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:38:40.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.104594 systemd-networkd[715]: eth0: Link UP Jul 14 22:38:40.104597 systemd-networkd[715]: eth0: Gained carrier Jul 14 22:38:40.105002 systemd[1]: Reached target network.target. Jul 14 22:38:40.109052 systemd[1]: Starting iscsiuio.service... 
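The "eth0: Link UP" / "Gained carrier" lines above are systemd-networkd mirroring kernel link state. A minimal sketch of reading the same state from sysfs (the interface name is taken from this log; operstate and carrier are standard attributes):

```python
import pathlib

def link_state(ifname: str) -> tuple[str, bool]:
    base = pathlib.Path("/sys/class/net") / ifname
    operstate = (base / "operstate").read_text().strip()
    # Note: reading "carrier" on an administratively-down link raises OSError.
    carrier = (base / "carrier").read_text().strip() == "1"
    return operstate, carrier

print(link_state("eth0"))  # e.g. ("up", True) once the link has carrier
```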
Jul 14 22:38:40.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.112976 ignition[654]: Ignition 2.14.0 Jul 14 22:38:40.134068 systemd[1]: Started iscsiuio.service. Jul 14 22:38:40.112985 ignition[654]: Stage: fetch-offline Jul 14 22:38:40.153081 systemd[1]: Starting iscsid.service... Jul 14 22:38:40.113036 ignition[654]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:38:40.113045 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:38:40.113154 ignition[654]: parsed url from cmdline: "" Jul 14 22:38:40.113157 ignition[654]: no config URL provided Jul 14 22:38:40.113161 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:38:40.113168 ignition[654]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:38:40.113190 ignition[654]: op(1): [started] loading QEMU firmware config module Jul 14 22:38:40.163087 iscsid[721]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 14 22:38:40.163087 iscsid[721]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 14 22:38:40.163087 iscsid[721]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 14 22:38:40.163087 iscsid[721]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 14 22:38:40.163087 iscsid[721]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 14 22:38:40.163087 iscsid[721]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 14 22:38:40.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.113195 ignition[654]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 22:38:40.161562 systemd[1]: Started iscsid.service. Jul 14 22:38:40.161408 ignition[654]: op(1): [finished] loading QEMU firmware config module Jul 14 22:38:40.163943 systemd[1]: Starting dracut-initqueue.service... Jul 14 22:38:40.175170 systemd[1]: Finished dracut-initqueue.service. Jul 14 22:38:40.177091 systemd[1]: Reached target remote-fs-pre.target. Jul 14 22:38:40.178007 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 22:38:40.179596 systemd[1]: Reached target remote-fs.target. Jul 14 22:38:40.181013 systemd[1]: Starting dracut-pre-mount.service... Jul 14 22:38:40.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.189655 systemd[1]: Finished dracut-pre-mount.service.
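The iscsid warnings above are harmless here (no iSCSI targets are involved), and the fix they describe is a single line in /etc/iscsi/initiatorname.iscsi. A minimal sketch that writes a syntactically valid IQN; the iqn.2004-10.com.example name is an invented placeholder, not anything from this log:

```python
import pathlib

# Format per the warning: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]
iqn = "iqn.2004-10.com.example:flatcar-node1"  # placeholder identifier

path = pathlib.Path("/etc/iscsi/initiatorname.iscsi")
path.parent.mkdir(parents=True, exist_ok=True)  # needs root on a real system
path.write_text(f"InitiatorName={iqn}\n")
```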
Jul 14 22:38:40.201022 systemd-networkd[715]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:38:40.334469 ignition[654]: parsing config with SHA512: e98ac4f6d5353988adc620aabeecbfc32a859da581bf5ce5297d53fc33f9a4b84785f4fb747a5596677edeb0b9f223fbc1b3b75b0395351f1b46e27419a0c272 Jul 14 22:38:40.343148 unknown[654]: fetched base config from "system" Jul 14 22:38:40.343163 unknown[654]: fetched user config from "qemu" Jul 14 22:38:40.345396 ignition[654]: fetch-offline: fetch-offline passed Jul 14 22:38:40.346274 ignition[654]: Ignition finished successfully Jul 14 22:38:40.347849 systemd[1]: Finished ignition-fetch-offline.service. Jul 14 22:38:40.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.348424 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:38:40.349708 systemd[1]: Starting ignition-kargs.service... Jul 14 22:38:40.360036 ignition[736]: Ignition 2.14.0 Jul 14 22:38:40.360047 ignition[736]: Stage: kargs Jul 14 22:38:40.360140 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:38:40.360152 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:38:40.363923 ignition[736]: kargs: kargs passed Jul 14 22:38:40.363965 ignition[736]: Ignition finished successfully Jul 14 22:38:40.366056 systemd[1]: Finished ignition-kargs.service. Jul 14 22:38:40.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.367752 systemd[1]: Starting ignition-disks.service... Jul 14 22:38:40.399690 ignition[742]: Ignition 2.14.0 Jul 14 22:38:40.399701 ignition[742]: Stage: disks Jul 14 22:38:40.399797 ignition[742]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:38:40.399806 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:38:40.401006 ignition[742]: disks: disks passed Jul 14 22:38:40.401059 ignition[742]: Ignition finished successfully Jul 14 22:38:40.404700 systemd[1]: Finished ignition-disks.service. Jul 14 22:38:40.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:40.405261 systemd[1]: Reached target initrd-root-device.target. Jul 14 22:38:40.406509 systemd[1]: Reached target local-fs-pre.target. Jul 14 22:38:40.406814 systemd[1]: Reached target local-fs.target. Jul 14 22:38:40.409489 systemd[1]: Reached target sysinit.target. Jul 14 22:38:40.410820 systemd[1]: Reached target basic.target. Jul 14 22:38:40.412848 systemd[1]: Starting systemd-fsck-root.service... Jul 14 22:38:40.420721 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.12 Jul 14 22:38:40.420734 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jul 14 22:38:40.448181 systemd-fsck[750]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 14 22:38:41.026707 systemd[1]: Finished systemd-fsck-root.service. Jul 14 22:38:41.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 14 22:38:41.027973 systemd[1]: Mounting sysroot.mount... Jul 14 22:38:41.062908 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 14 22:38:41.063154 systemd[1]: Mounted sysroot.mount. Jul 14 22:38:41.063627 systemd[1]: Reached target initrd-root-fs.target. Jul 14 22:38:41.066236 systemd[1]: Mounting sysroot-usr.mount... Jul 14 22:38:41.066879 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 14 22:38:41.066915 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:38:41.066938 systemd[1]: Reached target ignition-diskful.target. Jul 14 22:38:41.068958 systemd[1]: Mounted sysroot-usr.mount. Jul 14 22:38:41.074445 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 22:38:41.077239 systemd[1]: Starting initrd-setup-root.service... Jul 14 22:38:41.081441 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:38:41.089997 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:38:41.092792 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:38:41.096287 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:38:41.109907 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (756) Jul 14 22:38:41.112473 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:38:41.112509 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:38:41.112522 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:38:41.116640 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 14 22:38:41.132821 systemd[1]: Finished initrd-setup-root.service. Jul 14 22:38:41.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:41.134034 systemd[1]: Starting ignition-mount.service... Jul 14 22:38:41.136027 systemd[1]: Starting sysroot-boot.service... Jul 14 22:38:41.140570 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 14 22:38:41.140662 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 14 22:38:41.163208 systemd[1]: Finished sysroot-boot.service. Jul 14 22:38:41.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:41.367485 ignition[825]: INFO : Ignition 2.14.0 Jul 14 22:38:41.367485 ignition[825]: INFO : Stage: mount Jul 14 22:38:41.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:38:41.388052 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:38:41.388052 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:38:41.388052 ignition[825]: INFO : mount: mount passed Jul 14 22:38:41.388052 ignition[825]: INFO : Ignition finished successfully Jul 14 22:38:41.369240 systemd[1]: Finished ignition-mount.service. Jul 14 22:38:41.387205 systemd[1]: Starting ignition-files.service... 
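The Ignition run above found no local user config ("no config URL provided", "no config at /usr/lib/ignition/user.ign") and instead fetched one from QEMU; the files stage that follows executes it, including the SSH-key handling for the core user. A minimal sketch of generating such a config; the spec version (3.3.0) and the key string are assumptions, not taken from this log:

```python
import json

# A minimal Ignition config that adds an SSH key for the "core" user.
config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version; check your release
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"],  # placeholder key
    }]},
}

# Serve this as the user config, e.g. through QEMU's fw_cfg mechanism
# (the exact fw_cfg key name is documented by Flatcar).
print(json.dumps(config, indent=2))
```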
Jul 14 22:38:41.393208 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 22:38:41.399907 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (832) Jul 14 22:38:41.399933 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:38:41.401651 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:38:41.401665 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:38:41.405424 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 14 22:38:41.416300 ignition[851]: INFO : Ignition 2.14.0 Jul 14 22:38:41.416300 ignition[851]: INFO : Stage: files Jul 14 22:38:41.418175 ignition[851]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:38:41.418175 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:38:41.510418 ignition[851]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:38:41.510418 ignition[851]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:38:41.510418 ignition[851]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:38:41.515526 ignition[851]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:38:41.515526 ignition[851]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:38:41.515526 ignition[851]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:38:41.515526 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 14 22:38:41.515526 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 14 22:38:41.515526 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 22:38:41.515526 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 22:38:41.514716 unknown[851]: wrote ssh authorized keys file for user: core Jul 14 22:38:41.792079 systemd-networkd[715]: eth0: Gained IPv6LL Jul 14 22:38:46.595477 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 22:38:46.978330 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 22:38:46.982036 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 22:38:46.982036 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 14 22:38:52.320379 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 14 22:38:52.510120 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 14 22:38:52.510120 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:38:52.514490 ignition[851]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:38:52.514490 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 14 22:39:03.049677 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jul 14 22:39:03.418979 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:39:03.418979 ignition[851]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(11): [started] processing unit 
"coreos-metadata.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:39:03.423055 ignition[851]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:39:03.448218 ignition[851]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:39:03.448218 ignition[851]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:39:03.448218 ignition[851]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jul 14 22:39:03.448218 ignition[851]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 22:39:03.448218 ignition[851]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:39:03.448218 ignition[851]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:39:03.448218 ignition[851]: INFO : files: files passed Jul 14 22:39:03.448218 ignition[851]: INFO : Ignition finished successfully Jul 14 22:39:03.481879 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 14 22:39:03.481906 kernel: audit: type=1130 audit(1752532743.451:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.481918 kernel: audit: type=1130 audit(1752532743.465:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.481937 kernel: audit: type=1130 audit(1752532743.469:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.481946 kernel: audit: type=1131 audit(1752532743.469:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:39:03.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.448993 systemd[1]: Finished ignition-files.service. Jul 14 22:39:03.452363 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 14 22:39:03.484161 initrd-setup-root-after-ignition[875]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 14 22:39:03.460655 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 14 22:39:03.487484 initrd-setup-root-after-ignition[877]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:39:03.461555 systemd[1]: Starting ignition-quench.service... Jul 14 22:39:03.462881 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 14 22:39:03.465334 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:39:03.465399 systemd[1]: Finished ignition-quench.service. Jul 14 22:39:03.470851 systemd[1]: Reached target ignition-complete.target. Jul 14 22:39:03.480596 systemd[1]: Starting initrd-parse-etc.service... Jul 14 22:39:03.494307 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:39:03.494418 systemd[1]: Finished initrd-parse-etc.service. Jul 14 22:39:03.503491 kernel: audit: type=1130 audit(1752532743.496:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.503521 kernel: audit: type=1131 audit(1752532743.496:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.496287 systemd[1]: Reached target initrd-fs.target. Jul 14 22:39:03.503541 systemd[1]: Reached target initrd.target. Jul 14 22:39:03.504512 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 14 22:39:03.505413 systemd[1]: Starting dracut-pre-pivot.service... Jul 14 22:39:03.516204 systemd[1]: Finished dracut-pre-pivot.service. Jul 14 22:39:03.521287 kernel: audit: type=1130 audit(1752532743.517:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.517927 systemd[1]: Starting initrd-cleanup.service... Jul 14 22:39:03.526274 systemd[1]: Stopped target nss-lookup.target. Jul 14 22:39:03.527336 systemd[1]: Stopped target remote-cryptsetup.target. Jul 14 22:39:03.529299 systemd[1]: Stopped target timers.target. 
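
Kernel audit records bracket each unit transition above: type=1130 (SERVICE_START) and type=1131 (SERVICE_STOP). A small stand-alone parsing sketch, using one line copied from this log as input; the regex is a convenience for this field layout, not an official audit parser:

    import re

    line = (
        "audit: type=1130 audit(1752532743.451:34): pid=1 uid=0 "
        "auid=4294967295 ses=4294967295 subj=kernel "
        "msg='unit=ignition-files comm=\"systemd\" "
        "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? "
        "res=success'"
    )

    # Pull out record type, timestamp, serial, unit name, and result.
    m = re.search(r"type=(\d+) audit\((\d+\.\d+):(\d+)\).*?unit=(\S+).*?res=(\w+)", line)
    if m:
        rectype, ts, serial, unit, result = m.groups()
        print(rectype, ts, serial, unit, result)
        # -> 1130 1752532743.451 34 ignition-files success
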
Jul 14 22:39:03.531108 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:39:03.537409 kernel: audit: type=1131 audit(1752532743.532:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.531222 systemd[1]: Stopped dracut-pre-pivot.service. Jul 14 22:39:03.532905 systemd[1]: Stopped target initrd.target. Jul 14 22:39:03.537617 systemd[1]: Stopped target basic.target. Jul 14 22:39:03.539079 systemd[1]: Stopped target ignition-complete.target. Jul 14 22:39:03.540695 systemd[1]: Stopped target ignition-diskful.target. Jul 14 22:39:03.542483 systemd[1]: Stopped target initrd-root-device.target. Jul 14 22:39:03.544486 systemd[1]: Stopped target remote-fs.target. Jul 14 22:39:03.546309 systemd[1]: Stopped target remote-fs-pre.target. Jul 14 22:39:03.548187 systemd[1]: Stopped target sysinit.target. Jul 14 22:39:03.549704 systemd[1]: Stopped target local-fs.target. Jul 14 22:39:03.551231 systemd[1]: Stopped target local-fs-pre.target. Jul 14 22:39:03.552788 systemd[1]: Stopped target swap.target. Jul 14 22:39:03.560628 kernel: audit: type=1131 audit(1752532743.556:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.554353 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:39:03.554470 systemd[1]: Stopped dracut-pre-mount.service. Jul 14 22:39:03.567216 kernel: audit: type=1131 audit(1752532743.562:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.556179 systemd[1]: Stopped target cryptsetup.target. Jul 14 22:39:03.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.560658 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:39:03.560742 systemd[1]: Stopped dracut-initqueue.service. Jul 14 22:39:03.562717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:39:03.562802 systemd[1]: Stopped ignition-fetch-offline.service. Jul 14 22:39:03.567392 systemd[1]: Stopped target paths.target. Jul 14 22:39:03.569013 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:39:03.572920 systemd[1]: Stopped systemd-ask-password-console.path. Jul 14 22:39:03.574273 systemd[1]: Stopped target slices.target. Jul 14 22:39:03.575903 systemd[1]: Stopped target sockets.target. 
Jul 14 22:39:03.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.577991 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:39:03.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.578087 systemd[1]: Closed iscsid.socket. Jul 14 22:39:03.579701 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:39:03.579839 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 14 22:39:03.581603 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:39:03.581725 systemd[1]: Stopped ignition-files.service. Jul 14 22:39:03.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.592594 ignition[892]: INFO : Ignition 2.14.0 Jul 14 22:39:03.592594 ignition[892]: INFO : Stage: umount Jul 14 22:39:03.592594 ignition[892]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:39:03.592594 ignition[892]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:39:03.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.584307 systemd[1]: Stopping ignition-mount.service... Jul 14 22:39:03.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.598593 ignition[892]: INFO : umount: umount passed Jul 14 22:39:03.598593 ignition[892]: INFO : Ignition finished successfully Jul 14 22:39:03.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.585911 systemd[1]: Stopping iscsiuio.service... Jul 14 22:39:03.588932 systemd[1]: Stopping sysroot-boot.service... Jul 14 22:39:03.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.589454 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:39:03.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.589653 systemd[1]: Stopped systemd-udev-trigger.service. Jul 14 22:39:03.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.591395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:39:03.591547 systemd[1]: Stopped dracut-pre-trigger.service. Jul 14 22:39:03.595801 systemd[1]: iscsiuio.service: Deactivated successfully. 
Jul 14 22:39:03.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.595943 systemd[1]: Stopped iscsiuio.service. Jul 14 22:39:03.598250 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:39:03.598355 systemd[1]: Stopped ignition-mount.service. Jul 14 22:39:03.599565 systemd[1]: Stopped target network.target. Jul 14 22:39:03.600481 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:39:03.600524 systemd[1]: Closed iscsiuio.socket. Jul 14 22:39:03.602167 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:39:03.602223 systemd[1]: Stopped ignition-disks.service. Jul 14 22:39:03.603320 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:39:03.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.603356 systemd[1]: Stopped ignition-kargs.service. Jul 14 22:39:03.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.603636 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:39:03.603671 systemd[1]: Stopped ignition-setup.service. Jul 14 22:39:03.606424 systemd[1]: Stopping systemd-networkd.service... Jul 14 22:39:03.607951 systemd[1]: Stopping systemd-resolved.service... Jul 14 22:39:03.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.608715 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:39:03.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.608803 systemd[1]: Finished initrd-cleanup.service. Jul 14 22:39:03.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.618222 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:39:03.630000 audit: BPF prog-id=6 op=UNLOAD Jul 14 22:39:03.618336 systemd[1]: Stopped systemd-resolved.service. Jul 14 22:39:03.619087 systemd-networkd[715]: eth0: DHCPv6 lease lost Jul 14 22:39:03.632000 audit: BPF prog-id=9 op=UNLOAD Jul 14 22:39:03.620797 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:39:03.620907 systemd[1]: Stopped systemd-networkd.service. Jul 14 22:39:03.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.622250 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:39:03.622325 systemd[1]: Closed systemd-networkd.socket. 
Jul 14 22:39:03.624558 systemd[1]: Stopping network-cleanup.service... Jul 14 22:39:03.625128 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:39:03.625169 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 14 22:39:03.626437 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:39:03.626470 systemd[1]: Stopped systemd-sysctl.service. Jul 14 22:39:03.629202 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:39:03.629237 systemd[1]: Stopped systemd-modules-load.service. Jul 14 22:39:03.629715 systemd[1]: Stopping systemd-udevd.service... Jul 14 22:39:03.632255 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 22:39:03.634370 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:39:03.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.634458 systemd[1]: Stopped network-cleanup.service. Jul 14 22:39:03.641873 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:39:03.644381 systemd[1]: Stopped systemd-udevd.service. Jul 14 22:39:03.650150 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:39:03.650622 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:39:03.650704 systemd[1]: Stopped sysroot-boot.service. Jul 14 22:39:03.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.652401 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:39:03.652440 systemd[1]: Closed systemd-udevd-control.socket. Jul 14 22:39:03.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.652891 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:39:03.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.652922 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 14 22:39:03.654918 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:39:03.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.654961 systemd[1]: Stopped dracut-pre-udev.service. Jul 14 22:39:03.655224 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:39:03.655257 systemd[1]: Stopped dracut-cmdline.service. Jul 14 22:39:03.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.657910 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 14 22:39:03.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.657946 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 14 22:39:03.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.659529 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:39:03.659565 systemd[1]: Stopped initrd-setup-root.service. Jul 14 22:39:03.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:03.662324 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 14 22:39:03.662851 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:39:03.662970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 14 22:39:03.666092 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:39:03.666142 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 22:39:03.666593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:39:03.666628 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 14 22:39:03.678000 audit: BPF prog-id=8 op=UNLOAD Jul 14 22:39:03.678000 audit: BPF prog-id=7 op=UNLOAD Jul 14 22:39:03.668974 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 22:39:03.669385 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:39:03.681000 audit: BPF prog-id=5 op=UNLOAD Jul 14 22:39:03.681000 audit: BPF prog-id=4 op=UNLOAD Jul 14 22:39:03.669467 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 22:39:03.681000 audit: BPF prog-id=3 op=UNLOAD Jul 14 22:39:03.670220 systemd[1]: Reached target initrd-switch-root.target. Jul 14 22:39:03.672820 systemd[1]: Starting initrd-switch-root.service... Jul 14 22:39:03.679157 systemd[1]: Switching root. Jul 14 22:39:03.696763 iscsid[721]: iscsid shutting down. Jul 14 22:39:03.697510 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 14 22:39:03.697563 systemd-journald[198]: Journal stopped Jul 14 22:39:06.561508 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 22:39:06.561556 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 14 22:39:06.565128 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 22:39:06.565157 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:39:06.565168 kernel: SELinux: policy capability open_perms=1 Jul 14 22:39:06.565178 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:39:06.565193 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:39:06.565202 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:39:06.565224 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:39:06.565233 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:39:06.565246 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:39:06.565260 systemd[1]: Successfully loaded SELinux policy in 39.644ms. Jul 14 22:39:06.565283 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.863ms. Jul 14 22:39:06.565294 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 22:39:06.565305 systemd[1]: Detected virtualization kvm. Jul 14 22:39:06.565314 systemd[1]: Detected architecture x86-64. Jul 14 22:39:06.565326 systemd[1]: Detected first boot. Jul 14 22:39:06.565336 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:39:06.565347 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 14 22:39:06.565357 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:39:06.565371 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:39:06.565383 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:39:06.565395 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:39:06.565409 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:39:06.565419 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 14 22:39:06.565429 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 22:39:06.565439 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 22:39:06.565450 systemd[1]: Created slice system-getty.slice. Jul 14 22:39:06.565460 systemd[1]: Created slice system-modprobe.slice. Jul 14 22:39:06.565470 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 14 22:39:06.565481 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 22:39:06.565491 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 22:39:06.565503 systemd[1]: Created slice user.slice. Jul 14 22:39:06.565513 systemd[1]: Started systemd-ask-password-console.path. Jul 14 22:39:06.565523 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 22:39:06.565533 systemd[1]: Set up automount boot.automount. Jul 14 22:39:06.565543 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 22:39:06.565553 systemd[1]: Reached target integritysetup.target. 
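
The policy capabilities the kernel printed above (network_peer_controls=1, open_perms=1, and so on) can be read back from selinuxfs once the system is up. A short sketch, assuming /sys/fs/selinux is mounted, which is the normal state on this SELinux-enabled image:

    from pathlib import Path

    # Each capability is a file containing 0 or 1, matching the
    # "SELinux: policy capability ..." boot lines above.
    capdir = Path("/sys/fs/selinux/policy_capabilities")
    if capdir.is_dir():
        for cap in sorted(capdir.iterdir()):
            print(f"{cap.name}={cap.read_text().strip()}")
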
Jul 14 22:39:06.565563 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 22:39:06.565574 systemd[1]: Reached target remote-fs.target. Jul 14 22:39:06.565584 systemd[1]: Reached target slices.target. Jul 14 22:39:06.565596 systemd[1]: Reached target swap.target. Jul 14 22:39:06.565606 systemd[1]: Reached target torcx.target. Jul 14 22:39:06.565616 systemd[1]: Reached target veritysetup.target. Jul 14 22:39:06.565626 systemd[1]: Listening on systemd-coredump.socket. Jul 14 22:39:06.565636 systemd[1]: Listening on systemd-initctl.socket. Jul 14 22:39:06.565647 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 22:39:06.565660 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 22:39:06.565674 systemd[1]: Listening on systemd-journald.socket. Jul 14 22:39:06.565688 systemd[1]: Listening on systemd-networkd.socket. Jul 14 22:39:06.565701 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 22:39:06.565712 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 22:39:06.565722 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 22:39:06.565732 systemd[1]: Mounting dev-hugepages.mount... Jul 14 22:39:06.565742 systemd[1]: Mounting dev-mqueue.mount... Jul 14 22:39:06.565752 systemd[1]: Mounting media.mount... Jul 14 22:39:06.565763 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:06.565772 systemd[1]: Mounting sys-kernel-debug.mount... Jul 14 22:39:06.565782 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 22:39:06.565793 systemd[1]: Mounting tmp.mount... Jul 14 22:39:06.565804 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 22:39:06.565814 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:39:06.565824 systemd[1]: Starting kmod-static-nodes.service... Jul 14 22:39:06.565834 systemd[1]: Starting modprobe@configfs.service... Jul 14 22:39:06.565845 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:39:06.565855 systemd[1]: Starting modprobe@drm.service... Jul 14 22:39:06.565892 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:39:06.565903 systemd[1]: Starting modprobe@fuse.service... Jul 14 22:39:06.565916 systemd[1]: Starting modprobe@loop.service... Jul 14 22:39:06.565927 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:39:06.565937 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 14 22:39:06.565947 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 14 22:39:06.565957 kernel: fuse: init (API version 7.34) Jul 14 22:39:06.565966 systemd[1]: Starting systemd-journald.service... Jul 14 22:39:06.565976 systemd[1]: Starting systemd-modules-load.service... Jul 14 22:39:06.565985 kernel: loop: module loaded Jul 14 22:39:06.565995 systemd[1]: Starting systemd-network-generator.service... Jul 14 22:39:06.566007 systemd[1]: Starting systemd-remount-fs.service... Jul 14 22:39:06.566017 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 22:39:06.566028 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:06.566042 systemd-journald[1045]: Journal started Jul 14 22:39:06.566095 systemd-journald[1045]: Runtime Journal (/run/log/journal/3865efbcc0c74b59a29b3f117d0244bd) is 6.0M, max 48.5M, 42.5M free. 
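
journald sized its runtime journal above at "6.0M, max 48.5M, 42.5M free". A rough cross-check, assuming the usual /run/log/journal/<machine-id>/ layout; this totals on-disk file sizes only and ignores journald's internal accounting:

    from pathlib import Path

    root = Path("/run/log/journal")
    if root.is_dir():
        total = sum(p.stat().st_size for p in root.rglob("*.journal"))
        print(f"runtime journal: {total / 2**20:.1f} MiB on disk")
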
Jul 14 22:39:06.419000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 22:39:06.419000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 14 22:39:06.559000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 22:39:06.559000 audit[1045]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc4b91f410 a2=4000 a3=7ffc4b91f4ac items=0 ppid=1 pid=1045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:39:06.559000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 22:39:06.569375 systemd[1]: Started systemd-journald.service. Jul 14 22:39:06.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.570586 systemd[1]: Mounted dev-hugepages.mount. Jul 14 22:39:06.571554 systemd[1]: Mounted dev-mqueue.mount. Jul 14 22:39:06.572428 systemd[1]: Mounted media.mount. Jul 14 22:39:06.573346 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 22:39:06.574373 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 14 22:39:06.575490 systemd[1]: Mounted tmp.mount. Jul 14 22:39:06.576771 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 22:39:06.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.578169 systemd[1]: Finished kmod-static-nodes.service. Jul 14 22:39:06.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.579332 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:39:06.579523 systemd[1]: Finished modprobe@configfs.service. Jul 14 22:39:06.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.580783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:39:06.581134 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:39:06.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:39:06.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.582422 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:39:06.582671 systemd[1]: Finished modprobe@drm.service. Jul 14 22:39:06.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.583832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:39:06.583981 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:39:06.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.585093 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:39:06.585314 systemd[1]: Finished modprobe@fuse.service. Jul 14 22:39:06.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.586487 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:39:06.586780 systemd[1]: Finished modprobe@loop.service. Jul 14 22:39:06.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.600344 systemd[1]: Finished systemd-modules-load.service. Jul 14 22:39:06.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.601642 systemd[1]: Finished systemd-network-generator.service. Jul 14 22:39:06.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.602888 systemd[1]: Finished systemd-remount-fs.service. 
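
The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop instances above are template-unit wrappers around a plain module load. A quick way to confirm the result, with the caveat that any of these may be compiled into the kernel rather than loaded as a module:

    # Modules built into the kernel do not appear in /proc/modules.
    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}
    for mod in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        print(mod, "loaded" if mod in loaded else "absent or built-in")
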
Jul 14 22:39:06.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.604137 systemd[1]: Reached target network-pre.target. Jul 14 22:39:06.605942 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 22:39:06.607454 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 22:39:06.608225 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:39:06.609643 systemd[1]: Starting systemd-hwdb-update.service... Jul 14 22:39:06.611323 systemd[1]: Starting systemd-journal-flush.service... Jul 14 22:39:06.612195 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:39:06.613106 systemd[1]: Starting systemd-random-seed.service... Jul 14 22:39:06.613946 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:39:06.615322 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:39:06.625986 systemd-journald[1045]: Time spent on flushing to /var/log/journal/3865efbcc0c74b59a29b3f117d0244bd is 17.913ms for 1048 entries. Jul 14 22:39:06.625986 systemd-journald[1045]: System Journal (/var/log/journal/3865efbcc0c74b59a29b3f117d0244bd) is 8.0M, max 195.6M, 187.6M free. Jul 14 22:39:06.874852 systemd-journald[1045]: Received client request to flush runtime journal. Jul 14 22:39:06.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:06.617754 systemd[1]: Starting systemd-sysusers.service... Jul 14 22:39:06.633817 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 22:39:06.634908 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 22:39:06.875620 udevadm[1077]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 14 22:39:06.635950 systemd[1]: Mounted sys-kernel-config.mount. Jul 14 22:39:06.637916 systemd[1]: Starting systemd-udev-settle.service... Jul 14 22:39:06.695342 systemd[1]: Finished systemd-sysctl.service. Jul 14 22:39:06.697660 systemd[1]: Finished systemd-sysusers.service. Jul 14 22:39:06.699812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
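
The flush statistics above (17.913ms for 1048 entries) put journald's write-out rate at roughly:

    entries, seconds = 1048, 17.913e-3
    print(f"{entries / seconds:,.0f} entries/s")  # ~58,506 entries/s
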
Jul 14 22:39:06.716437 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 22:39:06.836738 systemd[1]: Finished systemd-random-seed.service. Jul 14 22:39:06.838174 systemd[1]: Reached target first-boot-complete.target. Jul 14 22:39:06.876073 systemd[1]: Finished systemd-journal-flush.service. Jul 14 22:39:06.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.190397 systemd[1]: Finished systemd-hwdb-update.service. Jul 14 22:39:07.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.192574 systemd[1]: Starting systemd-udevd.service... Jul 14 22:39:07.210934 systemd-udevd[1089]: Using default interface naming scheme 'v252'. Jul 14 22:39:07.225853 systemd[1]: Started systemd-udevd.service. Jul 14 22:39:07.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.229855 systemd[1]: Starting systemd-networkd.service... Jul 14 22:39:07.242820 systemd[1]: Starting systemd-userdbd.service... Jul 14 22:39:07.274660 systemd[1]: Found device dev-ttyS0.device. Jul 14 22:39:07.281046 systemd[1]: Started systemd-userdbd.service. Jul 14 22:39:07.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.292750 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 22:39:07.306883 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 14 22:39:07.312901 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:39:07.333752 systemd-networkd[1100]: lo: Link UP Jul 14 22:39:07.333765 systemd-networkd[1100]: lo: Gained carrier Jul 14 22:39:07.334217 systemd-networkd[1100]: Enumeration completed Jul 14 22:39:07.334326 systemd[1]: Started systemd-networkd.service. Jul 14 22:39:07.335301 systemd-networkd[1100]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:39:07.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:39:07.336488 systemd-networkd[1100]: eth0: Link UP Jul 14 22:39:07.336493 systemd-networkd[1100]: eth0: Gained carrier Jul 14 22:39:07.330000 audit[1103]: AVC avc: denied { confidentiality } for pid=1103 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 14 22:39:07.330000 audit[1103]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564310a418d0 a1=338ac a2=7ff80b26fbc5 a3=5 items=110 ppid=1089 pid=1103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:39:07.330000 audit: CWD cwd="/" Jul 14 22:39:07.330000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=1 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=2 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=3 name=(null) inode=15404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=4 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=5 name=(null) inode=15405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=6 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=7 name=(null) inode=15406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=8 name=(null) inode=15406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=9 name=(null) inode=15407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=10 name=(null) inode=15406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=11 name=(null) inode=15408 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=12 name=(null) inode=15406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=13 name=(null) inode=15409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=14 name=(null) inode=15406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=15 name=(null) inode=15410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=16 name=(null) inode=15406 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=17 name=(null) inode=15411 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=18 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=19 name=(null) inode=15412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=20 name=(null) inode=15412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=21 name=(null) inode=15413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=22 name=(null) inode=15412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=23 name=(null) inode=15414 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=24 name=(null) inode=15412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=25 name=(null) inode=15415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=26 name=(null) inode=15412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=27 name=(null) inode=15416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=28 name=(null) inode=15412 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: 
PATH item=29 name=(null) inode=15417 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=30 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=31 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=32 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=33 name=(null) inode=15419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=34 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=35 name=(null) inode=15420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=36 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=37 name=(null) inode=15421 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=38 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=39 name=(null) inode=15422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=40 name=(null) inode=15418 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=41 name=(null) inode=15423 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=42 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=43 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=44 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=45 name=(null) inode=15425 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=46 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=47 name=(null) inode=15426 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=48 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=49 name=(null) inode=15427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=50 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=51 name=(null) inode=15428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=52 name=(null) inode=15424 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=53 name=(null) inode=15429 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=55 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=56 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=57 name=(null) inode=15431 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=58 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=59 name=(null) inode=15432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=60 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=61 name=(null) inode=15433 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=62 name=(null) inode=15433 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=63 name=(null) inode=15434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=64 name=(null) inode=15433 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=65 name=(null) inode=15435 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=66 name=(null) inode=15433 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=67 name=(null) inode=15436 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=68 name=(null) inode=15433 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=69 name=(null) inode=15437 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=70 name=(null) inode=15433 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=71 name=(null) inode=15438 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=72 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=73 name=(null) inode=15439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=74 name=(null) inode=15439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=75 name=(null) inode=15440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=76 name=(null) inode=15439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=77 name=(null) inode=15441 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH 
item=78 name=(null) inode=15439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=79 name=(null) inode=15442 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=80 name=(null) inode=15439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=81 name=(null) inode=15443 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=82 name=(null) inode=15439 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=83 name=(null) inode=15444 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=84 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=85 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=86 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=87 name=(null) inode=15446 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=88 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=89 name=(null) inode=15447 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=90 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=91 name=(null) inode=15448 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=92 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=93 name=(null) inode=15449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=94 name=(null) inode=15445 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=95 name=(null) inode=15450 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=96 name=(null) inode=15430 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=97 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=98 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=99 name=(null) inode=15452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=100 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=101 name=(null) inode=15453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=102 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=103 name=(null) inode=15454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=104 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=105 name=(null) inode=15455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=106 name=(null) inode=15451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=107 name=(null) inode=15456 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PATH item=109 name=(null) inode=15457 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:39:07.330000 audit: PROCTITLE proctitle="(udev-worker)" Jul 14 22:39:07.349047 systemd-networkd[1100]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired 
from 10.0.0.1 Jul 14 22:39:07.356465 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 14 22:39:07.358069 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 14 22:39:07.358203 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 14 22:39:07.372902 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 14 22:39:07.375909 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:39:07.470181 kernel: kvm: Nested Virtualization enabled Jul 14 22:39:07.470320 kernel: SVM: kvm: Nested Paging enabled Jul 14 22:39:07.470337 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 14 22:39:07.470353 kernel: SVM: Virtual GIF supported Jul 14 22:39:07.497899 kernel: EDAC MC: Ver: 3.0.0 Jul 14 22:39:07.523422 systemd[1]: Finished systemd-udev-settle.service. Jul 14 22:39:07.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.525879 systemd[1]: Starting lvm2-activation-early.service... Jul 14 22:39:07.533983 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:39:07.556010 systemd[1]: Finished lvm2-activation-early.service. Jul 14 22:39:07.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.557191 systemd[1]: Reached target cryptsetup.target. Jul 14 22:39:07.559400 systemd[1]: Starting lvm2-activation.service... Jul 14 22:39:07.563116 lvm[1127]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:39:07.587827 systemd[1]: Finished lvm2-activation.service. Jul 14 22:39:07.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.589259 systemd[1]: Reached target local-fs-pre.target. Jul 14 22:39:07.590092 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:39:07.590112 systemd[1]: Reached target local-fs.target. Jul 14 22:39:07.590873 systemd[1]: Reached target machines.target. Jul 14 22:39:07.592670 systemd[1]: Starting ldconfig.service... Jul 14 22:39:07.593629 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:39:07.593663 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:39:07.594608 systemd[1]: Starting systemd-boot-update.service... Jul 14 22:39:07.596261 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 14 22:39:07.598293 systemd[1]: Starting systemd-machine-id-commit.service... Jul 14 22:39:07.600334 systemd[1]: Starting systemd-sysext.service... Jul 14 22:39:07.601782 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1130 (bootctl) Jul 14 22:39:07.602974 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 14 22:39:07.605308 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Jul 14 22:39:07.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.613110 systemd[1]: Unmounting usr-share-oem.mount... Jul 14 22:39:07.617204 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 14 22:39:07.617439 systemd[1]: Unmounted usr-share-oem.mount. Jul 14 22:39:07.628896 kernel: loop0: detected capacity change from 0 to 221472 Jul 14 22:39:07.650899 systemd-fsck[1142]: fsck.fat 4.2 (2021-01-31) Jul 14 22:39:07.650899 systemd-fsck[1142]: /dev/vda1: 790 files, 120725/258078 clusters Jul 14 22:39:07.651771 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 14 22:39:07.654508 systemd[1]: Mounting boot.mount... Jul 14 22:39:07.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:07.700779 ldconfig[1129]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:39:07.721567 systemd[1]: Mounted boot.mount. Jul 14 22:39:08.570901 systemd[1]: Finished systemd-boot-update.service. Jul 14 22:39:08.580242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:39:08.580294 kernel: kauditd_printk_skb: 199 callbacks suppressed Jul 14 22:39:08.580316 kernel: audit: type=1130 audit(1752532748.572:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.595900 kernel: loop1: detected capacity change from 0 to 221472 Jul 14 22:39:08.630266 (sd-sysext)[1150]: Using extensions 'kubernetes'. Jul 14 22:39:08.630635 (sd-sysext)[1150]: Merged extensions into '/usr'. Jul 14 22:39:08.645573 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:08.646805 systemd[1]: Mounting usr-share-oem.mount... Jul 14 22:39:08.647727 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.648613 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:39:08.650261 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:39:08.651993 systemd[1]: Starting modprobe@loop.service... Jul 14 22:39:08.652770 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.652886 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:39:08.652980 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:08.655721 systemd[1]: Mounted usr-share-oem.mount. Jul 14 22:39:08.656787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:39:08.656939 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 14 22:39:08.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.672528 kernel: audit: type=1130 audit(1752532748.661:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.672572 kernel: audit: type=1131 audit(1752532748.661:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.672589 kernel: audit: type=1130 audit(1752532748.668:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.672611 kernel: audit: type=1131 audit(1752532748.668:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.662333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:39:08.662455 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:39:08.669399 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:39:08.669523 systemd[1]: Finished modprobe@loop.service. Jul 14 22:39:08.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.676941 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:39:08.677052 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.678019 systemd[1]: Finished systemd-sysext.service. Jul 14 22:39:08.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.680265 kernel: audit: type=1130 audit(1752532748.676:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:39:08.680293 kernel: audit: type=1131 audit(1752532748.676:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.686395 kernel: audit: type=1130 audit(1752532748.682:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.686519 systemd[1]: Starting ensure-sysext.service... Jul 14 22:39:08.688085 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 14 22:39:08.693540 systemd[1]: Reloading. Jul 14 22:39:08.710113 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 14 22:39:08.711030 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:39:08.712984 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:39:08.742251 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-07-14T22:39:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:39:08.742276 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-07-14T22:39:08Z" level=info msg="torcx already run" Jul 14 22:39:08.853442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:39:08.853462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:39:08.875333 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:39:08.930409 systemd[1]: Finished ldconfig.service. Jul 14 22:39:08.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.932213 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 14 22:39:08.935903 kernel: audit: type=1130 audit(1752532748.930:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.935975 kernel: audit: type=1130 audit(1752532748.934:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:39:08.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.939454 systemd[1]: Starting audit-rules.service... Jul 14 22:39:08.941223 systemd[1]: Starting clean-ca-certificates.service... Jul 14 22:39:08.943340 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 14 22:39:08.945678 systemd[1]: Starting systemd-resolved.service... Jul 14 22:39:08.948071 systemd[1]: Starting systemd-timesyncd.service... Jul 14 22:39:08.949986 systemd[1]: Starting systemd-update-utmp.service... Jul 14 22:39:08.951818 systemd[1]: Finished clean-ca-certificates.service. Jul 14 22:39:08.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.954803 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:08.955095 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.956436 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:39:08.958442 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:39:08.961650 systemd[1]: Starting modprobe@loop.service... Jul 14 22:39:08.961000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.962543 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.962690 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:39:08.962844 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:39:08.962962 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:08.964747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:39:08.964928 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:39:08.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.966495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:39:08.966654 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:39:08.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:39:08.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.968007 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:39:08.968235 systemd[1]: Finished modprobe@loop.service. Jul 14 22:39:08.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.974824 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:08.975153 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.977175 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:39:08.979754 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:39:08.982107 systemd[1]: Starting modprobe@loop.service... Jul 14 22:39:08.982952 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.983108 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:39:08.983243 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:39:08.983341 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:08.985126 systemd[1]: Finished systemd-update-utmp.service. Jul 14 22:39:08.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.987076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:39:08.987258 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:39:08.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.988721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 14 22:39:08.988908 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:39:08.990546 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:39:08.990732 systemd[1]: Finished modprobe@loop.service. Jul 14 22:39:08.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:08.996059 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:08.996358 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:39:08.998313 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:39:09.000425 systemd[1]: Starting modprobe@drm.service... Jul 14 22:39:09.002689 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:39:09.005121 systemd[1]: Starting modprobe@loop.service... Jul 14 22:39:09.006056 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:39:09.006207 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:39:09.008047 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 14 22:39:09.012124 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:39:09.012280 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:39:09.015523 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:39:09.017148 systemd[1]: Finished systemd-machine-id-commit.service. Jul 14 22:39:09.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:09.018727 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 14 22:39:09.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:09.020205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:39:09.020404 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:39:09.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:09.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:09.021626 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:39:09.021792 systemd[1]: Finished modprobe@drm.service. 
Jul 14 22:39:09.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:09.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:39:09.023409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:39:09.023550 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:39:09.023000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 14 22:39:09.023000 audit[1282]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd5e9a5e0 a2=420 a3=0 items=0 ppid=1235 pid=1282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:39:09.023000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 14 22:39:09.024338 augenrules[1282]: No rules Jul 14 22:39:09.025068 systemd[1]: Finished audit-rules.service. Jul 14 22:39:09.026279 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:39:09.026530 systemd[1]: Finished modprobe@loop.service. Jul 14 22:39:09.028045 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:39:09.028173 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:39:09.029758 systemd[1]: Starting systemd-update-done.service... Jul 14 22:39:09.031610 systemd[1]: Finished ensure-sysext.service. Jul 14 22:39:09.037472 systemd[1]: Finished systemd-update-done.service. Jul 14 22:39:09.056334 systemd-resolved[1239]: Positive Trust Anchors: Jul 14 22:39:09.056351 systemd-resolved[1239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:39:09.056403 systemd-resolved[1239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 22:39:09.056964 systemd-networkd[1100]: eth0: Gained IPv6LL Jul 14 22:39:09.062816 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 14 22:39:09.065668 systemd-resolved[1239]: Defaulting to hostname 'linux'. Jul 14 22:39:09.066826 systemd[1]: Started systemd-timesyncd.service. Jul 14 22:39:09.510154 systemd-resolved[1239]: Clock change detected. Flushing caches. Jul 14 22:39:09.510188 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:39:09.510230 systemd-timesyncd[1240]: Initial clock synchronization to Mon 2025-07-14 22:39:09.510109 UTC. Jul 14 22:39:09.510836 systemd[1]: Started systemd-resolved.service. Jul 14 22:39:09.511926 systemd[1]: Reached target network.target. Jul 14 22:39:09.512761 systemd[1]: Reached target network-online.target. 
Jul 14 22:39:09.513629 systemd[1]: Reached target nss-lookup.target. Jul 14 22:39:09.514450 systemd[1]: Reached target sysinit.target. Jul 14 22:39:09.515301 systemd[1]: Started motdgen.path. Jul 14 22:39:09.516009 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 14 22:39:09.517110 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 14 22:39:09.517958 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:39:09.517981 systemd[1]: Reached target paths.target. Jul 14 22:39:09.518768 systemd[1]: Reached target time-set.target. Jul 14 22:39:09.519705 systemd[1]: Started logrotate.timer. Jul 14 22:39:09.520507 systemd[1]: Started mdadm.timer. Jul 14 22:39:09.521179 systemd[1]: Reached target timers.target. Jul 14 22:39:09.522219 systemd[1]: Listening on dbus.socket. Jul 14 22:39:09.524025 systemd[1]: Starting docker.socket... Jul 14 22:39:09.525946 systemd[1]: Listening on sshd.socket. Jul 14 22:39:09.526980 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:39:09.527316 systemd[1]: Listening on docker.socket. Jul 14 22:39:09.528240 systemd[1]: Reached target sockets.target. Jul 14 22:39:09.529181 systemd[1]: Reached target basic.target. Jul 14 22:39:09.530138 systemd[1]: System is tainted: cgroupsv1 Jul 14 22:39:09.530175 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 22:39:09.530193 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 22:39:09.531000 systemd[1]: Starting containerd.service... Jul 14 22:39:09.532938 systemd[1]: Starting dbus.service... Jul 14 22:39:09.534946 systemd[1]: Starting enable-oem-cloudinit.service... Jul 14 22:39:09.537225 systemd[1]: Starting extend-filesystems.service... Jul 14 22:39:09.538314 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 14 22:39:09.541453 jq[1299]: false Jul 14 22:39:09.539603 systemd[1]: Starting kubelet.service... Jul 14 22:39:09.541535 systemd[1]: Starting motdgen.service... Jul 14 22:39:09.543589 systemd[1]: Starting prepare-helm.service... Jul 14 22:39:09.545686 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 14 22:39:09.547877 systemd[1]: Starting sshd-keygen.service... Jul 14 22:39:09.551115 systemd[1]: Starting systemd-logind.service... Jul 14 22:39:09.552383 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:39:09.552471 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:39:09.553794 systemd[1]: Starting update-engine.service... 
Jul 14 22:39:09.556645 extend-filesystems[1300]: Found loop1 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found sr0 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda1 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda2 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda3 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found usr Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda4 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda6 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda7 Jul 14 22:39:09.558760 extend-filesystems[1300]: Found vda9 Jul 14 22:39:09.558760 extend-filesystems[1300]: Checking size of /dev/vda9 Jul 14 22:39:09.560575 dbus-daemon[1298]: [system] SELinux support is enabled Jul 14 22:39:09.561342 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 14 22:39:09.566747 systemd[1]: Started dbus.service. Jul 14 22:39:09.576518 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:39:09.578657 jq[1322]: true Jul 14 22:39:09.580523 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 14 22:39:09.581846 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:39:09.586456 systemd[1]: Finished motdgen.service. Jul 14 22:39:09.589100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:39:09.589463 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 14 22:39:09.593827 update_engine[1317]: I0714 22:39:09.590224 1317 main.cc:92] Flatcar Update Engine starting Jul 14 22:39:09.593749 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:39:09.593770 systemd[1]: Reached target system-config.target. Jul 14 22:39:09.595076 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:39:09.595100 systemd[1]: Reached target user-config.target. Jul 14 22:39:09.595951 jq[1332]: true Jul 14 22:39:09.604307 tar[1331]: linux-amd64/helm Jul 14 22:39:09.606311 extend-filesystems[1300]: Resized partition /dev/vda9 Jul 14 22:39:09.607399 update_engine[1317]: I0714 22:39:09.607059 1317 update_check_scheduler.cc:74] Next update check in 9m39s Jul 14 22:39:09.608050 systemd[1]: Started update-engine.service. Jul 14 22:39:09.610141 extend-filesystems[1346]: resize2fs 1.46.5 (30-Dec-2021) Jul 14 22:39:09.613065 systemd[1]: Started locksmithd.service. Jul 14 22:39:09.616665 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:39:09.645444 systemd-logind[1314]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 22:39:09.645709 systemd-logind[1314]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:39:09.647182 systemd-logind[1314]: New seat seat0. Jul 14 22:39:09.651654 env[1334]: time="2025-07-14T22:39:09.651598216Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 14 22:39:09.653299 systemd[1]: Started systemd-logind.service. Jul 14 22:39:09.673596 env[1334]: time="2025-07-14T22:39:09.673543095Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 14 22:39:09.673885 env[1334]: time="2025-07-14T22:39:09.673868014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:39:09.675051 env[1334]: time="2025-07-14T22:39:09.675026275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:39:09.675138 env[1334]: time="2025-07-14T22:39:09.675119911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:39:09.675450 env[1334]: time="2025-07-14T22:39:09.675430273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:39:09.675524 env[1334]: time="2025-07-14T22:39:09.675505534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:39:09.675600 env[1334]: time="2025-07-14T22:39:09.675580384Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 14 22:39:09.675670 env[1334]: time="2025-07-14T22:39:09.675652079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:39:09.675815 env[1334]: time="2025-07-14T22:39:09.675798172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:39:09.676079 env[1334]: time="2025-07-14T22:39:09.676061917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:39:09.676289 env[1334]: time="2025-07-14T22:39:09.676258856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:39:09.676361 env[1334]: time="2025-07-14T22:39:09.676343314Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 22:39:09.676480 env[1334]: time="2025-07-14T22:39:09.676461536Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 14 22:39:09.676560 env[1334]: time="2025-07-14T22:39:09.676541636Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:39:09.891300 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:39:10.050702 locksmithd[1347]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:39:10.538426 extend-filesystems[1346]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:39:10.538426 extend-filesystems[1346]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:39:10.538426 extend-filesystems[1346]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:39:10.544299 extend-filesystems[1300]: Resized filesystem in /dev/vda9 Jul 14 22:39:10.539027 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 14 22:39:10.539318 systemd[1]: Finished extend-filesystems.service. Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551798049Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551880033Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551893288Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551924717Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551937731Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551950806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551962788Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551975302Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.551987264Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.552000429Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.552011329Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.552022420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:39:10.552231 env[1334]: time="2025-07-14T22:39:10.552214831Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:39:10.552638 env[1334]: time="2025-07-14T22:39:10.552299329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:39:10.552638 env[1334]: time="2025-07-14T22:39:10.552586578Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:39:10.552638 env[1334]: time="2025-07-14T22:39:10.552608098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552638 env[1334]: time="2025-07-14T22:39:10.552619209Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552666698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552678249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552689761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552700241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552710189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552721170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552730768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552736 env[1334]: time="2025-07-14T22:39:10.552740857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552752959Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552849701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552862254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552872203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552899845Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552912889Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552923279Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:39:10.552932 env[1334]: time="2025-07-14T22:39:10.552939269Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 14 22:39:10.553161 env[1334]: time="2025-07-14T22:39:10.552971449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:39:10.553525 env[1334]: time="2025-07-14T22:39:10.553257786Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:39:10.553525 env[1334]: time="2025-07-14T22:39:10.553453763Z" level=info msg="Connect containerd service" Jul 14 22:39:10.555358 env[1334]: time="2025-07-14T22:39:10.554527276Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:39:10.555358 env[1334]: time="2025-07-14T22:39:10.555077126Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:39:10.555358 env[1334]: time="2025-07-14T22:39:10.555325031Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:39:10.555476 bash[1360]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:39:10.555392 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 14 22:39:10.556755 env[1334]: time="2025-07-14T22:39:10.555371999Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:39:10.556755 env[1334]: time="2025-07-14T22:39:10.555433585Z" level=info msg="containerd successfully booted in 0.915295s" Jul 14 22:39:10.556961 systemd[1]: Started containerd.service. 
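The "failed to load cni during init" error above is expected on a fresh node: /etc/cni/net.d is empty until a network plugin installs a config there. A hypothetical minimal bridge conflist that would satisfy the check (the name and subnet are illustrative, not from this system):

    sudo mkdir -p /etc/cni/net.d
    cat <<'EOF' | sudo tee /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
        }
      ]
    }
    EOF

In practice a CNI add-on (flannel, Calico, ...) writes this file itself, which is why the warning only clears once the cluster's network add-on is deployed.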
Jul 14 22:39:10.569297 env[1334]: time="2025-07-14T22:39:10.566575071Z" level=info msg="Start subscribing containerd event" Jul 14 22:39:10.569297 env[1334]: time="2025-07-14T22:39:10.566664208Z" level=info msg="Start recovering state" Jul 14 22:39:10.569297 env[1334]: time="2025-07-14T22:39:10.566740531Z" level=info msg="Start event monitor" Jul 14 22:39:10.569297 env[1334]: time="2025-07-14T22:39:10.566754858Z" level=info msg="Start snapshots syncer" Jul 14 22:39:10.569297 env[1334]: time="2025-07-14T22:39:10.566766840Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:39:10.569297 env[1334]: time="2025-07-14T22:39:10.566776238Z" level=info msg="Start streaming server" Jul 14 22:39:10.633795 sshd_keygen[1325]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:39:10.651846 systemd[1]: Finished sshd-keygen.service. Jul 14 22:39:10.687645 systemd[1]: Starting issuegen.service... Jul 14 22:39:10.694004 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:39:10.694318 systemd[1]: Finished issuegen.service. Jul 14 22:39:10.700418 systemd[1]: Starting systemd-user-sessions.service... Jul 14 22:39:10.706593 systemd[1]: Finished systemd-user-sessions.service. Jul 14 22:39:10.711919 systemd[1]: Started getty@tty1.service. Jul 14 22:39:10.715062 systemd[1]: Started serial-getty@ttyS0.service. Jul 14 22:39:10.716217 systemd[1]: Reached target getty.target. Jul 14 22:39:10.814889 tar[1331]: linux-amd64/LICENSE Jul 14 22:39:10.815296 tar[1331]: linux-amd64/README.md Jul 14 22:39:10.819976 systemd[1]: Finished prepare-helm.service. Jul 14 22:39:11.530056 systemd[1]: Created slice system-sshd.slice. Jul 14 22:39:11.532549 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:39252.service. Jul 14 22:39:11.572128 sshd[1393]: Accepted publickey for core from 10.0.0.1 port 39252 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:39:11.574110 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:39:11.581871 systemd[1]: Created slice user-500.slice. Jul 14 22:39:11.583662 systemd[1]: Starting user-runtime-dir@500.service... Jul 14 22:39:11.587132 systemd-logind[1314]: New session 1 of user core. Jul 14 22:39:11.597596 systemd[1]: Finished user-runtime-dir@500.service. Jul 14 22:39:11.601795 systemd[1]: Starting user@500.service... Jul 14 22:39:11.606545 (systemd)[1398]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:39:11.637681 systemd[1]: Started kubelet.service. Jul 14 22:39:11.638842 systemd[1]: Reached target multi-user.target. Jul 14 22:39:11.640986 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 14 22:39:11.646668 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 14 22:39:11.646963 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 14 22:39:11.689384 systemd[1398]: Queued start job for default target default.target. Jul 14 22:39:11.689620 systemd[1398]: Reached target paths.target. Jul 14 22:39:11.689635 systemd[1398]: Reached target sockets.target. Jul 14 22:39:11.689646 systemd[1398]: Reached target timers.target. Jul 14 22:39:11.689656 systemd[1398]: Reached target basic.target. Jul 14 22:39:11.689785 systemd[1]: Started user@500.service. Jul 14 22:39:11.690244 systemd[1398]: Reached target default.target. Jul 14 22:39:11.690287 systemd[1398]: Startup finished in 72ms. Jul 14 22:39:11.691424 systemd[1]: Started session-1.scope. 
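The sshd_keygen entry above corresponds to generating any host keys that do not exist yet; the equivalent manual invocation is a one-liner:

    sudo ssh-keygen -A    # create missing /etc/ssh/ssh_host_{rsa,ecdsa,ed25519}_key pairs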
Jul 14 22:39:11.693866 systemd[1]: Startup finished in 26.732s (kernel) + 7.466s (userspace) = 34.199s. Jul 14 22:39:11.750471 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:39262.service. Jul 14 22:39:11.891280 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 39262 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:39:11.892536 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:39:11.896558 systemd-logind[1314]: New session 2 of user core. Jul 14 22:39:11.897559 systemd[1]: Started session-2.scope. Jul 14 22:39:11.957239 sshd[1415]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:11.960295 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:39276.service. Jul 14 22:39:11.962120 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:39262.service: Deactivated successfully. Jul 14 22:39:11.963172 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:39:11.963716 systemd-logind[1314]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:39:11.964579 systemd-logind[1314]: Removed session 2. Jul 14 22:39:11.995755 sshd[1425]: Accepted publickey for core from 10.0.0.1 port 39276 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:39:11.997179 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:39:12.001480 systemd-logind[1314]: New session 3 of user core. Jul 14 22:39:12.002237 systemd[1]: Started session-3.scope. Jul 14 22:39:12.055139 sshd[1425]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:12.057594 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:39278.service. Jul 14 22:39:12.058593 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:39276.service: Deactivated successfully. Jul 14 22:39:12.059316 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:39:12.060811 systemd-logind[1314]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:39:12.061902 systemd-logind[1314]: Removed session 3. Jul 14 22:39:12.091423 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 39278 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:39:12.122083 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:39:12.129197 systemd-logind[1314]: New session 4 of user core. Jul 14 22:39:12.129434 systemd[1]: Started session-4.scope. Jul 14 22:39:12.203104 sshd[1432]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:12.205484 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:39284.service. Jul 14 22:39:12.206221 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:39278.service: Deactivated successfully. Jul 14 22:39:12.207299 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:39:12.207411 systemd-logind[1314]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:39:12.208282 systemd-logind[1314]: Removed session 4. Jul 14 22:39:12.240691 sshd[1439]: Accepted publickey for core from 10.0.0.1 port 39284 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:39:12.241945 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:39:12.246482 systemd-logind[1314]: New session 5 of user core. Jul 14 22:39:12.246565 systemd[1]: Started session-5.scope. 
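The "Startup finished in 26.732s (kernel) + 7.466s (userspace)" accounting above can be reproduced after boot; a sketch for digging into the unusually long kernel phase:

    systemd-analyze                                     # same one-line summary as the log entry
    systemd-analyze blame                               # per-unit startup cost, longest first
    systemd-analyze critical-chain multi-user.target    # what the default target actually waited on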
Jul 14 22:39:12.306499 sudo[1446]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:39:12.306758 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 14 22:39:12.349554 systemd[1]: Starting docker.service... Jul 14 22:39:12.455806 env[1458]: time="2025-07-14T22:39:12.455668226Z" level=info msg="Starting up" Jul 14 22:39:12.457330 env[1458]: time="2025-07-14T22:39:12.457298332Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 14 22:39:12.457330 env[1458]: time="2025-07-14T22:39:12.457326615Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 14 22:39:12.457431 env[1458]: time="2025-07-14T22:39:12.457369145Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 14 22:39:12.457431 env[1458]: time="2025-07-14T22:39:12.457383582Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 14 22:39:12.460281 env[1458]: time="2025-07-14T22:39:12.459752753Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 14 22:39:12.460281 env[1458]: time="2025-07-14T22:39:12.459778431Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 14 22:39:12.460281 env[1458]: time="2025-07-14T22:39:12.459798619Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 14 22:39:12.460281 env[1458]: time="2025-07-14T22:39:12.459807936Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 14 22:39:12.501647 kubelet[1409]: E0714 22:39:12.501584 1409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:39:12.503495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:39:12.503674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:39:16.452063 env[1458]: time="2025-07-14T22:39:16.452015169Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 14 22:39:16.452063 env[1458]: time="2025-07-14T22:39:16.452046538Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 14 22:39:16.452573 env[1458]: time="2025-07-14T22:39:16.452203823Z" level=info msg="Loading containers: start." Jul 14 22:39:17.119297 kernel: Initializing XFRM netlink socket Jul 14 22:39:17.147751 env[1458]: time="2025-07-14T22:39:17.147701543Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 14 22:39:17.197960 systemd-networkd[1100]: docker0: Link UP Jul 14 22:39:17.260301 env[1458]: time="2025-07-14T22:39:17.260243298Z" level=info msg="Loading containers: done." 
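The kubelet crash above (exit status 1, config file missing) repeats below on a timer: systemd keeps restarting the unit as long as /var/lib/kubelet/config.yaml does not exist. On a kubeadm-provisioned node that file is written during kubeadm init/join; a minimal hand-written sketch, using only settings that the later successful start (see the nodeConfig dump further down) actually reports:

    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs                    # matches "CgroupDriver":"cgroupfs" in the log
    staticPodPath: /etc/kubernetes/manifests  # matches "Adding static pod path" below
    EOF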
Jul 14 22:39:17.288424 env[1458]: time="2025-07-14T22:39:17.288345370Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:39:17.288628 env[1458]: time="2025-07-14T22:39:17.288546627Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 14 22:39:17.288657 env[1458]: time="2025-07-14T22:39:17.288643198Z" level=info msg="Daemon has completed initialization" Jul 14 22:39:17.326861 systemd[1]: Started docker.service. Jul 14 22:39:17.331395 env[1458]: time="2025-07-14T22:39:17.331322727Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:39:22.682616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:39:22.682819 systemd[1]: Stopped kubelet.service. Jul 14 22:39:22.684784 systemd[1]: Starting kubelet.service... Jul 14 22:39:22.910738 systemd[1]: Started kubelet.service. Jul 14 22:39:22.964073 kubelet[1595]: E0714 22:39:22.963815 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:39:22.967235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:39:22.967439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:39:27.689376 env[1334]: time="2025-07-14T22:39:27.689320533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 14 22:39:33.182650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:39:33.182844 systemd[1]: Stopped kubelet.service. Jul 14 22:39:33.184143 systemd[1]: Starting kubelet.service... Jul 14 22:39:33.292777 systemd[1]: Started kubelet.service. Jul 14 22:39:33.329507 kubelet[1613]: E0714 22:39:33.329447 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:39:33.331021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:39:33.331156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:39:41.899697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935497872.mount: Deactivated successfully. Jul 14 22:39:43.432738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:39:43.432923 systemd[1]: Stopped kubelet.service. Jul 14 22:39:43.434594 systemd[1]: Starting kubelet.service... Jul 14 22:39:43.529681 systemd[1]: Started kubelet.service. 
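The "Scheduled restart job, restart counter is at N" entries are systemd's Restart= policy re-launching the failed kubelet roughly every ten seconds. The effective policy and current count can be inspected like this (a sketch; the NRestarts property needs a reasonably recent systemd):

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
    journalctl -u kubelet.service -n 20 --no-pager    # the most recent failure reason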
Jul 14 22:39:44.155566 kubelet[1630]: E0714 22:39:44.155481 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:39:44.157397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:39:44.157554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:39:46.898908 env[1334]: time="2025-07-14T22:39:46.898829900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:47.099006 env[1334]: time="2025-07-14T22:39:47.098959767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:47.521029 env[1334]: time="2025-07-14T22:39:47.520982359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:47.653281 env[1334]: time="2025-07-14T22:39:47.653214561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:47.654008 env[1334]: time="2025-07-14T22:39:47.653982061Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 14 22:39:47.654645 env[1334]: time="2025-07-14T22:39:47.654612187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 14 22:39:54.182687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:39:54.182867 systemd[1]: Stopped kubelet.service. Jul 14 22:39:54.184787 systemd[1]: Starting kubelet.service... Jul 14 22:39:54.274905 systemd[1]: Started kubelet.service. Jul 14 22:39:54.533299 update_engine[1317]: I0714 22:39:54.533145 1317 update_attempter.cc:509] Updating boot flags... Jul 14 22:39:54.833350 kubelet[1645]: E0714 22:39:54.833214 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:39:54.835127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:39:54.835278 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:39:58.835214 env[1334]: time="2025-07-14T22:39:58.835144276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:58.931761 env[1334]: time="2025-07-14T22:39:58.931689371Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:59.007393 env[1334]: time="2025-07-14T22:39:59.007333974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:59.079880 env[1334]: time="2025-07-14T22:39:59.079826296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:39:59.080721 env[1334]: time="2025-07-14T22:39:59.080692566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 14 22:39:59.081246 env[1334]: time="2025-07-14T22:39:59.081210446Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 14 22:40:04.932606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 14 22:40:04.932783 systemd[1]: Stopped kubelet.service. Jul 14 22:40:04.934107 systemd[1]: Starting kubelet.service... Jul 14 22:40:05.025748 systemd[1]: Started kubelet.service. Jul 14 22:40:07.565404 kubelet[1677]: E0714 22:40:07.565335 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:07.567202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:07.567413 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:40:07.941566 env[1334]: time="2025-07-14T22:40:07.941490708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:08.037210 env[1334]: time="2025-07-14T22:40:08.037121700Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:08.108586 env[1334]: time="2025-07-14T22:40:08.108521909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:08.161002 env[1334]: time="2025-07-14T22:40:08.160965104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:08.162377 env[1334]: time="2025-07-14T22:40:08.162323254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 14 22:40:08.162985 env[1334]: time="2025-07-14T22:40:08.162946860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 14 22:40:12.627886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360030760.mount: Deactivated successfully. Jul 14 22:40:13.967208 env[1334]: time="2025-07-14T22:40:13.967130466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:14.009191 env[1334]: time="2025-07-14T22:40:14.009120686Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:14.013023 env[1334]: time="2025-07-14T22:40:14.012966593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:14.016137 env[1334]: time="2025-07-14T22:40:14.016098847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:14.016550 env[1334]: time="2025-07-14T22:40:14.016517955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 14 22:40:14.017036 env[1334]: time="2025-07-14T22:40:14.017011834Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:40:14.685005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633464012.mount: Deactivated successfully. 
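Each PullImage/ImageCreate sequence above is a CRI-level pull performed by containerd on the kubelet's behalf. The same pull can be driven by hand over the CRI socket, assuming the crictl tool is installed:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.31.8
    sudo crictl images | grep kube-proxy    # should show the sha256:7d73f0... reference from the log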
Jul 14 22:40:16.323362 env[1334]: time="2025-07-14T22:40:16.323284504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:16.331786 env[1334]: time="2025-07-14T22:40:16.331715801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:16.336282 env[1334]: time="2025-07-14T22:40:16.336231975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:16.338746 env[1334]: time="2025-07-14T22:40:16.338677164Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:16.339609 env[1334]: time="2025-07-14T22:40:16.339567388Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 22:40:16.340229 env[1334]: time="2025-07-14T22:40:16.340200469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:40:17.682840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 14 22:40:17.683051 systemd[1]: Stopped kubelet.service. Jul 14 22:40:17.684541 systemd[1]: Starting kubelet.service... Jul 14 22:40:17.799845 systemd[1]: Started kubelet.service. Jul 14 22:40:18.136002 kubelet[1694]: E0714 22:40:18.135860 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:18.138339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:18.138490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:18.613070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497279547.mount: Deactivated successfully. 
Jul 14 22:40:18.629558 env[1334]: time="2025-07-14T22:40:18.629399890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:18.636166 env[1334]: time="2025-07-14T22:40:18.636104897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:18.638974 env[1334]: time="2025-07-14T22:40:18.638932602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:18.643368 env[1334]: time="2025-07-14T22:40:18.643329389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:18.644877 env[1334]: time="2025-07-14T22:40:18.644829970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:40:18.645712 env[1334]: time="2025-07-14T22:40:18.645675099Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 22:40:23.080170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577234892.mount: Deactivated successfully. Jul 14 22:40:26.665618 env[1334]: time="2025-07-14T22:40:26.665528052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:26.672187 env[1334]: time="2025-07-14T22:40:26.672101987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:26.677962 env[1334]: time="2025-07-14T22:40:26.677891108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:26.689529 env[1334]: time="2025-07-14T22:40:26.689460384Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 14 22:40:26.691057 env[1334]: time="2025-07-14T22:40:26.691012569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:28.182709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 14 22:40:28.182973 systemd[1]: Stopped kubelet.service. Jul 14 22:40:28.184801 systemd[1]: Starting kubelet.service... Jul 14 22:40:28.296064 systemd[1]: Started kubelet.service. 
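Note the pause image: containerd's CRI config dump earlier in the log advertises SandboxImage:registry.k8s.io/pause:3.6, while the tooling here pulls pause:3.10. If the two are meant to agree, the containerd side lives in /etc/containerd/config.toml; a hedged sketch of the change (appending assumes the section is not already present):

    cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
    EOF
    sudo systemctl restart containerd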
Jul 14 22:40:28.350775 kubelet[1716]: E0714 22:40:28.350710 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:28.352674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:28.352860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:38.432757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jul 14 22:40:38.433043 systemd[1]: Stopped kubelet.service. Jul 14 22:40:38.434521 systemd[1]: Starting kubelet.service... Jul 14 22:40:38.565711 systemd[1]: Started kubelet.service. Jul 14 22:40:38.603422 kubelet[1748]: E0714 22:40:38.603361 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:40:38.605090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:40:38.605254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:40:38.716382 systemd[1]: Stopped kubelet.service. Jul 14 22:40:38.718543 systemd[1]: Starting kubelet.service... Jul 14 22:40:38.741138 systemd[1]: Reloading. Jul 14 22:40:38.815891 /usr/lib/systemd/system-generators/torcx-generator[1784]: time="2025-07-14T22:40:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:40:38.816282 /usr/lib/systemd/system-generators/torcx-generator[1784]: time="2025-07-14T22:40:38Z" level=info msg="torcx already run" Jul 14 22:40:41.289891 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:40:41.289907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:40:41.308741 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:40:41.382493 systemd[1]: Started kubelet.service. Jul 14 22:40:41.384679 systemd[1]: Stopping kubelet.service... Jul 14 22:40:41.385193 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:40:41.385644 systemd[1]: Stopped kubelet.service. Jul 14 22:40:41.387501 systemd[1]: Starting kubelet.service... Jul 14 22:40:41.480352 systemd[1]: Started kubelet.service. Jul 14 22:40:41.512025 kubelet[1848]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:40:41.512025 kubelet[1848]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 14 22:40:41.512025 kubelet[1848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:40:41.512534 kubelet[1848]: I0714 22:40:41.512081 1848 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:40:41.804510 kubelet[1848]: I0714 22:40:41.804452 1848 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:40:41.804510 kubelet[1848]: I0714 22:40:41.804492 1848 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:40:41.804824 kubelet[1848]: I0714 22:40:41.804801 1848 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:40:41.856511 kubelet[1848]: E0714 22:40:41.856457 1848 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:41.864450 kubelet[1848]: I0714 22:40:41.864412 1848 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:40:41.877468 kubelet[1848]: E0714 22:40:41.877447 1848 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:40:41.877468 kubelet[1848]: I0714 22:40:41.877468 1848 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:40:41.882060 kubelet[1848]: I0714 22:40:41.882039 1848 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:40:41.882408 kubelet[1848]: I0714 22:40:41.882391 1848 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:40:41.882597 kubelet[1848]: I0714 22:40:41.882567 1848 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:40:41.882938 kubelet[1848]: I0714 22:40:41.882596 1848 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 22:40:41.883049 kubelet[1848]: I0714 22:40:41.882957 1848 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:40:41.883049 kubelet[1848]: I0714 22:40:41.882967 1848 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:40:41.883113 kubelet[1848]: I0714 22:40:41.883097 1848 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:40:41.892240 kubelet[1848]: I0714 22:40:41.892206 1848 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:40:41.892333 kubelet[1848]: I0714 22:40:41.892255 1848 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:40:41.892333 kubelet[1848]: I0714 22:40:41.892316 1848 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:40:41.892408 kubelet[1848]: I0714 22:40:41.892341 1848 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:40:41.892442 kubelet[1848]: W0714 22:40:41.892383 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:41.892488 kubelet[1848]: E0714 22:40:41.892461 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:41.895150 kubelet[1848]: I0714 22:40:41.895118 1848 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 22:40:41.895546 kubelet[1848]: I0714 22:40:41.895525 1848 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:40:41.895619 kubelet[1848]: W0714 22:40:41.895583 1848 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:40:41.914081 kubelet[1848]: W0714 22:40:41.914033 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:41.914147 kubelet[1848]: E0714 22:40:41.914086 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:41.914964 kubelet[1848]: I0714 22:40:41.914928 1848 server.go:1274] "Started kubelet" Jul 14 22:40:41.915903 kubelet[1848]: I0714 22:40:41.915414 1848 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:40:41.915903 kubelet[1848]: I0714 22:40:41.915798 1848 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:40:41.915903 kubelet[1848]: I0714 22:40:41.915849 1848 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:40:41.916773 kubelet[1848]: I0714 22:40:41.916760 1848 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:40:41.919993 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 14 22:40:41.920120 kubelet[1848]: I0714 22:40:41.920099 1848 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:40:41.924608 kubelet[1848]: I0714 22:40:41.924584 1848 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:40:41.925767 kubelet[1848]: I0714 22:40:41.925744 1848 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:40:41.941105 kubelet[1848]: E0714 22:40:41.941058 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:41.943402 kubelet[1848]: I0714 22:40:41.943384 1848 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:40:41.943463 kubelet[1848]: I0714 22:40:41.943439 1848 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:40:41.951065 kubelet[1848]: E0714 22:40:41.949195 1848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Jul 14 22:40:41.951065 kubelet[1848]: I0714 22:40:41.949534 1848 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:40:41.951065 kubelet[1848]: I0714 22:40:41.949630 1848 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:40:41.951065 kubelet[1848]: W0714 22:40:41.949972 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:41.951065 kubelet[1848]: E0714 22:40:41.950007 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:41.951065 kubelet[1848]: E0714 22:40:41.950825 1848 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:40:41.951065 kubelet[1848]: I0714 22:40:41.950937 1848 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:40:41.952358 kubelet[1848]: I0714 22:40:41.952252 1848 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:40:41.953291 kubelet[1848]: I0714 22:40:41.953250 1848 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:40:41.953347 kubelet[1848]: I0714 22:40:41.953294 1848 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:40:41.953347 kubelet[1848]: I0714 22:40:41.953311 1848 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:40:41.953420 kubelet[1848]: E0714 22:40:41.953347 1848 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:40:41.953651 kubelet[1848]: E0714 22:40:41.952723 1848 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523f559b8b22cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:40:41.914901195 +0000 UTC m=+0.430909001,LastTimestamp:2025-07-14 22:40:41.914901195 +0000 UTC m=+0.430909001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:40:41.956668 kubelet[1848]: W0714 22:40:41.956618 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:41.956668 kubelet[1848]: E0714 22:40:41.956660 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:41.968507 kubelet[1848]: I0714 22:40:41.968479 1848 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:40:41.968507 kubelet[1848]: I0714 22:40:41.968499 1848 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:40:41.968668 kubelet[1848]: I0714 22:40:41.968523 1848 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:40:42.041714 kubelet[1848]: E0714 22:40:42.041675 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.054162 kubelet[1848]: E0714 22:40:42.054120 1848 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:40:42.142590 kubelet[1848]: E0714 22:40:42.142428 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.150677 kubelet[1848]: E0714 22:40:42.150609 1848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Jul 14 22:40:42.242852 kubelet[1848]: E0714 22:40:42.242741 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.255331 kubelet[1848]: E0714 22:40:42.255223 1848 kubelet.go:2345] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Jul 14 22:40:42.343710 kubelet[1848]: E0714 22:40:42.343597 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.444125 kubelet[1848]: E0714 22:40:42.443946 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.544477 kubelet[1848]: E0714 22:40:42.544379 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.552374 kubelet[1848]: E0714 22:40:42.552304 1848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Jul 14 22:40:42.645596 kubelet[1848]: E0714 22:40:42.645505 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.655862 kubelet[1848]: E0714 22:40:42.655805 1848 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:40:42.746420 kubelet[1848]: E0714 22:40:42.746278 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.770137 kubelet[1848]: W0714 22:40:42.770059 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:42.770253 kubelet[1848]: E0714 22:40:42.770144 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:42.846841 kubelet[1848]: E0714 22:40:42.846785 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:42.867919 kubelet[1848]: W0714 22:40:42.867863 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:42.867968 kubelet[1848]: E0714 22:40:42.867923 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:42.887563 kubelet[1848]: W0714 22:40:42.887512 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:42.887634 kubelet[1848]: E0714 22:40:42.887563 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:42.947911 kubelet[1848]: E0714 22:40:42.947872 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.034035 kubelet[1848]: W0714 22:40:43.033877 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:43.034035 kubelet[1848]: E0714 22:40:43.033966 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:43.048491 kubelet[1848]: E0714 22:40:43.048456 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.149012 kubelet[1848]: E0714 22:40:43.148949 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.249515 kubelet[1848]: E0714 22:40:43.249464 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.350131 kubelet[1848]: E0714 22:40:43.350009 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.353645 kubelet[1848]: E0714 22:40:43.353596 1848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Jul 14 22:40:43.450714 kubelet[1848]: E0714 22:40:43.450639 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.456889 kubelet[1848]: E0714 22:40:43.456829 1848 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:40:43.551543 kubelet[1848]: E0714 22:40:43.551456 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.652081 kubelet[1848]: E0714 22:40:43.651950 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.752381 kubelet[1848]: E0714 22:40:43.752327 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.852906 kubelet[1848]: E0714 22:40:43.852839 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.945475 kubelet[1848]: I0714 22:40:43.945422 1848 policy_none.go:49] "None policy: Start" Jul 14 22:40:43.946288 kubelet[1848]: I0714 22:40:43.946240 1848 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:40:43.946346 kubelet[1848]: I0714 22:40:43.946292 1848 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:40:43.953431 kubelet[1848]: E0714 22:40:43.953402 1848 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:43.992413 kubelet[1848]: E0714 22:40:43.992370 1848 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:44.053970 kubelet[1848]: E0714 22:40:44.053931 1848 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:40:44.055944 kubelet[1848]: I0714 22:40:44.055920 1848 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:40:44.056081 kubelet[1848]: I0714 22:40:44.056064 1848 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:40:44.056136 kubelet[1848]: I0714 22:40:44.056082 1848 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:40:44.056353 kubelet[1848]: I0714 22:40:44.056328 1848 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:40:44.057611 kubelet[1848]: E0714 22:40:44.057597 1848 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:40:44.158402 kubelet[1848]: I0714 22:40:44.158361 1848 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:40:44.158923 kubelet[1848]: E0714 22:40:44.158871 1848 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:40:44.361135 kubelet[1848]: I0714 22:40:44.361021 1848 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:40:44.361421 kubelet[1848]: E0714 22:40:44.361383 1848 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:40:44.762852 kubelet[1848]: I0714 22:40:44.762809 1848 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:40:44.763235 kubelet[1848]: E0714 22:40:44.763148 1848 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:40:44.765583 kubelet[1848]: W0714 22:40:44.765551 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:44.765693 kubelet[1848]: E0714 22:40:44.765588 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:44.809415 kubelet[1848]: W0714 22:40:44.809357 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:44.809415 kubelet[1848]: E0714 22:40:44.809402 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:44.954614 kubelet[1848]: E0714 22:40:44.954548 1848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="3.2s" Jul 14 22:40:45.011345 kubelet[1848]: W0714 22:40:45.011291 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:45.011345 kubelet[1848]: E0714 22:40:45.011348 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:45.148788 kubelet[1848]: W0714 22:40:45.148675 1848 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 14 22:40:45.148788 kubelet[1848]: E0714 22:40:45.148738 1848 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:40:45.158132 kubelet[1848]: I0714 22:40:45.158097 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca9af87f6c961abe6cdbd38ff1cd5372-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca9af87f6c961abe6cdbd38ff1cd5372\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:40:45.158132 kubelet[1848]: I0714 22:40:45.158128 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:40:45.158238 kubelet[1848]: I0714 22:40:45.158151 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:40:45.158238 kubelet[1848]: I0714 22:40:45.158172 1848 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:40:45.158238 kubelet[1848]: I0714 22:40:45.158189 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca9af87f6c961abe6cdbd38ff1cd5372-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca9af87f6c961abe6cdbd38ff1cd5372\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:40:45.158352 kubelet[1848]: I0714 22:40:45.158237 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca9af87f6c961abe6cdbd38ff1cd5372-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ca9af87f6c961abe6cdbd38ff1cd5372\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:40:45.158352 kubelet[1848]: I0714 22:40:45.158254 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:40:45.158352 kubelet[1848]: I0714 22:40:45.158292 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:40:45.158352 kubelet[1848]: I0714 22:40:45.158314 1848 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:40:45.362406 kubelet[1848]: E0714 22:40:45.362364 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:45.362600 kubelet[1848]: E0714 22:40:45.362365 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:45.363002 kubelet[1848]: E0714 22:40:45.362951 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:45.363390 env[1334]: time="2025-07-14T22:40:45.363070175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Jul 14 22:40:45.363390 env[1334]: time="2025-07-14T22:40:45.363138853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Jul 14 22:40:45.363390 env[1334]: 
time="2025-07-14T22:40:45.363316397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ca9af87f6c961abe6cdbd38ff1cd5372,Namespace:kube-system,Attempt:0,}" Jul 14 22:40:45.564389 kubelet[1848]: I0714 22:40:45.564344 1848 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:40:45.564714 kubelet[1848]: E0714 22:40:45.564685 1848 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:40:45.675798 kubelet[1848]: E0714 22:40:45.675682 1848 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523f559b8b22cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:40:41.914901195 +0000 UTC m=+0.430909001,LastTimestamp:2025-07-14 22:40:41.914901195 +0000 UTC m=+0.430909001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:40:46.891169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398797695.mount: Deactivated successfully. Jul 14 22:40:46.902854 env[1334]: time="2025-07-14T22:40:46.902797746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:46.906197 env[1334]: time="2025-07-14T22:40:46.906150226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:46.907970 env[1334]: time="2025-07-14T22:40:46.907930965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:46.910119 env[1334]: time="2025-07-14T22:40:46.910061322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:46.985999 env[1334]: time="2025-07-14T22:40:46.985938855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:46.995564 env[1334]: time="2025-07-14T22:40:46.995505775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:46.998675 env[1334]: time="2025-07-14T22:40:46.998628440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:47.001926 env[1334]: time="2025-07-14T22:40:47.001891665Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:47.004576 env[1334]: time="2025-07-14T22:40:47.004541476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:47.006356 env[1334]: time="2025-07-14T22:40:47.006301831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:47.013855 env[1334]: time="2025-07-14T22:40:47.013809177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:47.019819 env[1334]: time="2025-07-14T22:40:47.019779401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:40:47.068116 env[1334]: time="2025-07-14T22:40:47.068039337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:40:47.068116 env[1334]: time="2025-07-14T22:40:47.068088702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:40:47.068116 env[1334]: time="2025-07-14T22:40:47.068102078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:40:47.068352 env[1334]: time="2025-07-14T22:40:47.068239545Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/911714f0603933fad3e3d304223ee562e6f4a092b03a98428991d3b5d8afe56d pid=1889 runtime=io.containerd.runc.v2 Jul 14 22:40:47.091306 env[1334]: time="2025-07-14T22:40:47.088343182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:40:47.091306 env[1334]: time="2025-07-14T22:40:47.088421834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:40:47.091306 env[1334]: time="2025-07-14T22:40:47.088442364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:40:47.091306 env[1334]: time="2025-07-14T22:40:47.088593337Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22289d0802edc7d954d54a2734ba6a8a9934f192e749abd02f92e82a02408106 pid=1922 runtime=io.containerd.runc.v2 Jul 14 22:40:47.092161 env[1334]: time="2025-07-14T22:40:47.092089370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:40:47.092242 env[1334]: time="2025-07-14T22:40:47.092138846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:40:47.092242 env[1334]: time="2025-07-14T22:40:47.092151200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:40:47.092369 env[1334]: time="2025-07-14T22:40:47.092256034Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a569284ffb09116704556e7141e6a4e26ef15e84b37ff0184f763bdd575971d pid=1923 runtime=io.containerd.runc.v2 Jul 14 22:40:47.127581 env[1334]: time="2025-07-14T22:40:47.127520145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"911714f0603933fad3e3d304223ee562e6f4a092b03a98428991d3b5d8afe56d\"" Jul 14 22:40:47.128439 kubelet[1848]: E0714 22:40:47.128407 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:47.129999 env[1334]: time="2025-07-14T22:40:47.129972803Z" level=info msg="CreateContainer within sandbox \"911714f0603933fad3e3d304223ee562e6f4a092b03a98428991d3b5d8afe56d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:40:47.147074 env[1334]: time="2025-07-14T22:40:47.146973849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ca9af87f6c961abe6cdbd38ff1cd5372,Namespace:kube-system,Attempt:0,} returns sandbox id \"22289d0802edc7d954d54a2734ba6a8a9934f192e749abd02f92e82a02408106\"" Jul 14 22:40:47.148090 kubelet[1848]: E0714 22:40:47.148069 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:47.148836 env[1334]: time="2025-07-14T22:40:47.148787618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a569284ffb09116704556e7141e6a4e26ef15e84b37ff0184f763bdd575971d\"" Jul 14 22:40:47.149540 kubelet[1848]: E0714 22:40:47.149517 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:47.150279 env[1334]: time="2025-07-14T22:40:47.150225749Z" level=info msg="CreateContainer within sandbox \"22289d0802edc7d954d54a2734ba6a8a9934f192e749abd02f92e82a02408106\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:40:47.151231 env[1334]: time="2025-07-14T22:40:47.151188406Z" level=info msg="CreateContainer within sandbox \"2a569284ffb09116704556e7141e6a4e26ef15e84b37ff0184f763bdd575971d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:40:47.166691 kubelet[1848]: I0714 22:40:47.166657 1848 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:40:47.167109 kubelet[1848]: E0714 22:40:47.167066 1848 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 14 22:40:47.179997 env[1334]: time="2025-07-14T22:40:47.179939311Z" level=info msg="CreateContainer within sandbox 
\"911714f0603933fad3e3d304223ee562e6f4a092b03a98428991d3b5d8afe56d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9cb97059b25c24554d462ac7c6ba08627c9f360a7f61c7d832d7c54d14f5fd5d\"" Jul 14 22:40:47.181077 env[1334]: time="2025-07-14T22:40:47.181038152Z" level=info msg="StartContainer for \"9cb97059b25c24554d462ac7c6ba08627c9f360a7f61c7d832d7c54d14f5fd5d\"" Jul 14 22:40:47.206717 env[1334]: time="2025-07-14T22:40:47.206632822Z" level=info msg="CreateContainer within sandbox \"22289d0802edc7d954d54a2734ba6a8a9934f192e749abd02f92e82a02408106\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f24d6517d11ae9280927ca82746d00198054467da99d72631175cf8ec33e271\"" Jul 14 22:40:47.207613 env[1334]: time="2025-07-14T22:40:47.207571263Z" level=info msg="StartContainer for \"1f24d6517d11ae9280927ca82746d00198054467da99d72631175cf8ec33e271\"" Jul 14 22:40:47.213623 env[1334]: time="2025-07-14T22:40:47.213567297Z" level=info msg="CreateContainer within sandbox \"2a569284ffb09116704556e7141e6a4e26ef15e84b37ff0184f763bdd575971d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"309f4dfeec0b2480000bee51cb3dae39d7d80b11bfd62218f36af1daba194fc5\"" Jul 14 22:40:47.214595 env[1334]: time="2025-07-14T22:40:47.214533061Z" level=info msg="StartContainer for \"309f4dfeec0b2480000bee51cb3dae39d7d80b11bfd62218f36af1daba194fc5\"" Jul 14 22:40:47.245513 env[1334]: time="2025-07-14T22:40:47.245442823Z" level=info msg="StartContainer for \"9cb97059b25c24554d462ac7c6ba08627c9f360a7f61c7d832d7c54d14f5fd5d\" returns successfully" Jul 14 22:40:47.278586 env[1334]: time="2025-07-14T22:40:47.278480234Z" level=info msg="StartContainer for \"1f24d6517d11ae9280927ca82746d00198054467da99d72631175cf8ec33e271\" returns successfully" Jul 14 22:40:47.302321 env[1334]: time="2025-07-14T22:40:47.302248551Z" level=info msg="StartContainer for \"309f4dfeec0b2480000bee51cb3dae39d7d80b11bfd62218f36af1daba194fc5\" returns successfully" Jul 14 22:40:47.967603 kubelet[1848]: E0714 22:40:47.967503 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:47.969149 kubelet[1848]: E0714 22:40:47.969107 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:47.970551 kubelet[1848]: E0714 22:40:47.970520 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:48.393864 kubelet[1848]: E0714 22:40:48.393730 1848 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 22:40:48.736655 kubelet[1848]: E0714 22:40:48.736614 1848 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 22:40:48.972139 kubelet[1848]: E0714 22:40:48.972109 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:48.972324 kubelet[1848]: E0714 22:40:48.972113 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:49.090237 kubelet[1848]: E0714 22:40:49.090101 1848 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 22:40:49.536496 kubelet[1848]: E0714 22:40:49.536460 1848 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 14 22:40:49.896817 kubelet[1848]: I0714 22:40:49.896686 1848 apiserver.go:52] "Watching apiserver" Jul 14 22:40:49.944609 kubelet[1848]: I0714 22:40:49.944537 1848 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:40:50.368623 kubelet[1848]: I0714 22:40:50.368574 1848 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:40:50.419845 kubelet[1848]: I0714 22:40:50.419801 1848 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:40:51.039958 kubelet[1848]: E0714 22:40:51.039913 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:51.975777 kubelet[1848]: E0714 22:40:51.975722 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:55.332058 kubelet[1848]: E0714 22:40:55.332018 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:55.332539 kubelet[1848]: I0714 22:40:55.332062 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.332032654 podStartE2EDuration="5.332032654s" podCreationTimestamp="2025-07-14 22:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:40:52.257850857 +0000 UTC m=+10.773858663" watchObservedRunningTime="2025-07-14 22:40:55.332032654 +0000 UTC m=+13.848040460" Jul 14 22:40:55.982874 kubelet[1848]: E0714 22:40:55.982845 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:58.312529 kubelet[1848]: E0714 22:40:58.312493 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:58.807592 kubelet[1848]: I0714 22:40:58.807527 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.807501895 podStartE2EDuration="3.807501895s" podCreationTimestamp="2025-07-14 22:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:40:58.406023884 +0000 UTC m=+16.922031721" watchObservedRunningTime="2025-07-14 22:40:58.807501895 +0000 UTC m=+17.323509701" Jul 14 22:40:58.988344 kubelet[1848]: E0714 22:40:58.988314 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:40:59.317707 systemd[1]: Reloading. Jul 14 22:40:59.383691 /usr/lib/systemd/system-generators/torcx-generator[2149]: time="2025-07-14T22:40:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:40:59.383722 /usr/lib/systemd/system-generators/torcx-generator[2149]: time="2025-07-14T22:40:59Z" level=info msg="torcx already run" Jul 14 22:41:00.590228 kubelet[1848]: E0714 22:41:00.590192 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:00.991345 kubelet[1848]: E0714 22:41:00.991314 1848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:01.221395 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:41:01.221417 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:41:01.227138 kubelet[1848]: I0714 22:41:01.226945 1848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.226925137 podStartE2EDuration="3.226925137s" podCreationTimestamp="2025-07-14 22:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:40:58.809036626 +0000 UTC m=+17.325044432" watchObservedRunningTime="2025-07-14 22:41:01.226925137 +0000 UTC m=+19.742932943" Jul 14 22:41:01.243642 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:41:01.325956 systemd[1]: Stopping kubelet.service... Jul 14 22:41:01.351738 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:41:01.352042 systemd[1]: Stopped kubelet.service. Jul 14 22:41:01.353864 systemd[1]: Starting kubelet.service... Jul 14 22:41:01.446610 systemd[1]: Started kubelet.service. Jul 14 22:41:01.478005 kubelet[2204]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:41:01.478005 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:41:01.478005 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
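[Editor's annotation, not part of the captured journal.] The retry storm above — the lease controller backing off 800ms → 1.6s → 3.2s while every request to https://10.0.0.12:6443 is refused — is the kubelet waiting for the kube-apiserver static pod that it is itself about to start. A minimal client-go sketch of the same two probes (the Node object and its Lease in kube-node-lease); the endpoint and node name are taken from the log, everything else (insecure TLS, fixed sleep) is an assumption for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Endpoint and node name as they appear in the log; credentials are
	// elided here and would normally come from the kubelet's kubeconfig.
	cfg := &rest.Config{Host: "https://10.0.0.12:6443"}
	cfg.TLSClientConfig.Insecure = true // assumption: skip verification for the probe
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		// The same lookups the kubelet keeps retrying: the Node object ...
		_, nodeErr := client.CoreV1().Nodes().Get(ctx, "localhost", metav1.GetOptions{})
		// ... and its Lease ("Failed to ensure lease exists, will retry").
		_, leaseErr := client.CoordinationV1().Leases("kube-node-lease").Get(ctx, "localhost", metav1.GetOptions{})
		cancel()
		if nodeErr == nil && leaseErr == nil {
			fmt.Println("node registered and lease present")
			return
		}
		fmt.Printf("node: %v / lease: %v\n", nodeErr, leaseErr)
		time.Sleep(2 * time.Second) // the kubelet instead doubles its interval: 800ms, 1.6s, 3.2s
	}
}

In the journal this loop breaks at 22:40:50, once the kube-apiserver container started at 22:40:47 begins accepting connections and "Successfully registered node" is logged.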
Jul 14 22:41:01.478407 kubelet[2204]: I0714 22:41:01.478046 2204 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:41:01.484994 kubelet[2204]: I0714 22:41:01.484949 2204 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:41:01.484994 kubelet[2204]: I0714 22:41:01.484979 2204 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:41:01.485228 kubelet[2204]: I0714 22:41:01.485206 2204 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:41:01.486383 kubelet[2204]: I0714 22:41:01.486362 2204 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 22:41:01.487920 kubelet[2204]: I0714 22:41:01.487899 2204 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:41:01.490478 kubelet[2204]: E0714 22:41:01.490438 2204 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:41:01.490583 kubelet[2204]: I0714 22:41:01.490558 2204 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:41:01.495007 kubelet[2204]: I0714 22:41:01.494891 2204 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 22:41:01.495358 kubelet[2204]: I0714 22:41:01.495342 2204 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:41:01.495573 kubelet[2204]: I0714 22:41:01.495467 2204 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:41:01.495699 kubelet[2204]: I0714 22:41:01.495502 2204 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 22:41:01.495822 kubelet[2204]: I0714 22:41:01.495712 2204 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:41:01.495822 kubelet[2204]: I0714 22:41:01.495724 2204 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:41:01.495822 kubelet[2204]: I0714 22:41:01.495755 2204 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:41:01.495911 kubelet[2204]: I0714 22:41:01.495864 2204 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:41:01.495911 kubelet[2204]: I0714 22:41:01.495877 2204 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:41:01.495911 kubelet[2204]: I0714 22:41:01.495902 2204 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:41:01.495911 kubelet[2204]: I0714 22:41:01.495911 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:41:01.498567 kubelet[2204]: I0714 22:41:01.498510 2204 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 22:41:01.498961 kubelet[2204]: I0714 22:41:01.498931 2204 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:41:01.499445 kubelet[2204]: I0714 22:41:01.499410 2204 server.go:1274] "Started kubelet" Jul 14 22:41:01.502705 kubelet[2204]: I0714 22:41:01.501197 2204 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:41:01.502705 kubelet[2204]: I0714 22:41:01.501513 2204 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:41:01.502705 kubelet[2204]: I0714 22:41:01.501603 2204 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:41:01.508951 kubelet[2204]: I0714 22:41:01.508915 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:41:01.510969 kubelet[2204]: I0714 22:41:01.510929 2204 server.go:449] "Adding 
debug handlers to kubelet server" Jul 14 22:41:01.511873 kubelet[2204]: I0714 22:41:01.511849 2204 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:41:01.513706 kubelet[2204]: I0714 22:41:01.513640 2204 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:41:01.513760 kubelet[2204]: I0714 22:41:01.513715 2204 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:41:01.513881 kubelet[2204]: I0714 22:41:01.513856 2204 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:41:01.515758 kubelet[2204]: I0714 22:41:01.515730 2204 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:41:01.515835 kubelet[2204]: I0714 22:41:01.515818 2204 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:41:01.517427 kubelet[2204]: I0714 22:41:01.517382 2204 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:41:01.517567 kubelet[2204]: E0714 22:41:01.517533 2204 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:41:01.526493 kubelet[2204]: I0714 22:41:01.526111 2204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:41:01.528552 kubelet[2204]: I0714 22:41:01.528500 2204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 22:41:01.528623 kubelet[2204]: I0714 22:41:01.528556 2204 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:41:01.528623 kubelet[2204]: I0714 22:41:01.528579 2204 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:41:01.528700 kubelet[2204]: E0714 22:41:01.528625 2204 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:41:01.561053 kubelet[2204]: I0714 22:41:01.561014 2204 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:41:01.561053 kubelet[2204]: I0714 22:41:01.561039 2204 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:41:01.561053 kubelet[2204]: I0714 22:41:01.561059 2204 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:41:01.561257 kubelet[2204]: I0714 22:41:01.561229 2204 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:41:01.561257 kubelet[2204]: I0714 22:41:01.561241 2204 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:41:01.561357 kubelet[2204]: I0714 22:41:01.561281 2204 policy_none.go:49] "None policy: Start" Jul 14 22:41:01.561757 kubelet[2204]: I0714 22:41:01.561726 2204 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:41:01.561757 kubelet[2204]: I0714 22:41:01.561752 2204 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:41:01.561968 kubelet[2204]: I0714 22:41:01.561912 2204 state_mem.go:75] "Updated machine memory state" Jul 14 22:41:01.563215 kubelet[2204]: I0714 22:41:01.563189 2204 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:41:01.563490 kubelet[2204]: I0714 22:41:01.563396 2204 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 
22:41:01.563490 kubelet[2204]: I0714 22:41:01.563411 2204 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:41:01.563691 kubelet[2204]: I0714 22:41:01.563670 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:41:01.680018 kubelet[2204]: I0714 22:41:01.679978 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:41:01.774318 kubelet[2204]: E0714 22:41:01.774171 2204 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:01.774583 kubelet[2204]: E0714 22:41:01.774564 2204 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 22:41:01.775404 kubelet[2204]: E0714 22:41:01.775377 2204 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:01.808065 kubelet[2204]: I0714 22:41:01.808011 2204 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 22:41:01.808227 kubelet[2204]: I0714 22:41:01.808123 2204 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:41:01.815489 kubelet[2204]: I0714 22:41:01.815451 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:01.815687 kubelet[2204]: I0714 22:41:01.815496 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:01.815687 kubelet[2204]: I0714 22:41:01.815524 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca9af87f6c961abe6cdbd38ff1cd5372-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca9af87f6c961abe6cdbd38ff1cd5372\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:01.815687 kubelet[2204]: I0714 22:41:01.815542 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca9af87f6c961abe6cdbd38ff1cd5372-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ca9af87f6c961abe6cdbd38ff1cd5372\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:01.815687 kubelet[2204]: I0714 22:41:01.815566 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:01.815687 kubelet[2204]: I0714 22:41:01.815583 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:01.815824 kubelet[2204]: I0714 22:41:01.815600 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:41:01.815824 kubelet[2204]: I0714 22:41:01.815615 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:41:01.815824 kubelet[2204]: I0714 22:41:01.815632 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca9af87f6c961abe6cdbd38ff1cd5372-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca9af87f6c961abe6cdbd38ff1cd5372\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:02.075193 kubelet[2204]: E0714 22:41:02.075076 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:02.075326 kubelet[2204]: E0714 22:41:02.075213 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:02.076104 kubelet[2204]: E0714 22:41:02.076077 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:02.496891 kubelet[2204]: I0714 22:41:02.496840 2204 apiserver.go:52] "Watching apiserver" Jul 14 22:41:02.514220 kubelet[2204]: I0714 22:41:02.514146 2204 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:41:02.534763 kubelet[2204]: E0714 22:41:02.534724 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:02.535253 kubelet[2204]: E0714 22:41:02.535230 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:03.553601 kubelet[2204]: E0714 22:41:03.553471 2204 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:41:03.554884 kubelet[2204]: E0714 22:41:03.553670 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:04.473818 sudo[2239]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 22:41:04.474088 sudo[2239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 14 
22:41:04.537561 kubelet[2204]: E0714 22:41:04.537526 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:04.962170 sudo[2239]: pam_unix(sudo:session): session closed for user root Jul 14 22:41:06.333682 kubelet[2204]: E0714 22:41:06.333615 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:06.646095 kubelet[2204]: E0714 22:41:06.645960 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:07.379247 kubelet[2204]: E0714 22:41:07.379201 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:07.540844 kubelet[2204]: E0714 22:41:07.540798 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:07.553377 kubelet[2204]: E0714 22:41:07.553352 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:08.542597 kubelet[2204]: E0714 22:41:08.542563 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:10.864679 kubelet[2204]: I0714 22:41:10.864647 2204 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:41:10.865108 kubelet[2204]: I0714 22:41:10.865090 2204 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:41:10.865168 env[1334]: time="2025-07-14T22:41:10.864934863Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 14 22:41:12.483598 kubelet[2204]: I0714 22:41:12.483535 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4dac3f3-2252-4c30-9811-29c9b2cb62c9-kube-proxy\") pod \"kube-proxy-28c5x\" (UID: \"c4dac3f3-2252-4c30-9811-29c9b2cb62c9\") " pod="kube-system/kube-proxy-28c5x" Jul 14 22:41:12.483598 kubelet[2204]: I0714 22:41:12.483597 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-run\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484152 kubelet[2204]: I0714 22:41:12.483628 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-etc-cni-netd\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484152 kubelet[2204]: I0714 22:41:12.483651 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-config-path\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484152 kubelet[2204]: I0714 22:41:12.483721 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-cgroup\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484152 kubelet[2204]: I0714 22:41:12.483743 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-xtables-lock\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484152 kubelet[2204]: I0714 22:41:12.483765 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6rbp\" (UniqueName: \"kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-kube-api-access-m6rbp\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484152 kubelet[2204]: I0714 22:41:12.483799 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cni-path\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484344 kubelet[2204]: I0714 22:41:12.483832 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4dac3f3-2252-4c30-9811-29c9b2cb62c9-lib-modules\") pod \"kube-proxy-28c5x\" (UID: \"c4dac3f3-2252-4c30-9811-29c9b2cb62c9\") " pod="kube-system/kube-proxy-28c5x" Jul 14 22:41:12.484344 kubelet[2204]: I0714 22:41:12.483848 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-hostproc\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484344 kubelet[2204]: I0714 22:41:12.483864 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ea3f25f-b892-432c-a399-1228ee2630d2-clustermesh-secrets\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484344 kubelet[2204]: I0714 22:41:12.483981 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4dac3f3-2252-4c30-9811-29c9b2cb62c9-xtables-lock\") pod \"kube-proxy-28c5x\" (UID: \"c4dac3f3-2252-4c30-9811-29c9b2cb62c9\") " pod="kube-system/kube-proxy-28c5x" Jul 14 22:41:12.484344 kubelet[2204]: I0714 22:41:12.484049 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-bpf-maps\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484344 kubelet[2204]: I0714 22:41:12.484073 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-lib-modules\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484516 kubelet[2204]: I0714 22:41:12.484099 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-kernel\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484516 kubelet[2204]: I0714 22:41:12.484124 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-net\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484516 kubelet[2204]: I0714 22:41:12.484148 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-hubble-tls\") pod \"cilium-6hd6z\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " pod="kube-system/cilium-6hd6z" Jul 14 22:41:12.484516 kubelet[2204]: I0714 22:41:12.484172 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9vw\" (UniqueName: \"kubernetes.io/projected/c4dac3f3-2252-4c30-9811-29c9b2cb62c9-kube-api-access-5z9vw\") pod \"kube-proxy-28c5x\" (UID: \"c4dac3f3-2252-4c30-9811-29c9b2cb62c9\") " pod="kube-system/kube-proxy-28c5x" Jul 14 22:41:12.585226 kubelet[2204]: I0714 22:41:12.585170 2204 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 22:41:12.786406 kubelet[2204]: I0714 22:41:12.786122 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae24e431-76f5-47d8-a8b3-5db74da44c76-cilium-config-path\") pod \"cilium-operator-5d85765b45-d4nbf\" (UID: \"ae24e431-76f5-47d8-a8b3-5db74da44c76\") " pod="kube-system/cilium-operator-5d85765b45-d4nbf" Jul 14 22:41:12.786406 kubelet[2204]: I0714 22:41:12.786173 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pxvl\" (UniqueName: \"kubernetes.io/projected/ae24e431-76f5-47d8-a8b3-5db74da44c76-kube-api-access-9pxvl\") pod \"cilium-operator-5d85765b45-d4nbf\" (UID: \"ae24e431-76f5-47d8-a8b3-5db74da44c76\") " pod="kube-system/cilium-operator-5d85765b45-d4nbf" Jul 14 22:41:13.000593 kubelet[2204]: E0714 22:41:13.000530 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:13.001235 env[1334]: time="2025-07-14T22:41:13.001196220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28c5x,Uid:c4dac3f3-2252-4c30-9811-29c9b2cb62c9,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:13.002440 kubelet[2204]: E0714 22:41:13.002418 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:13.002791 env[1334]: time="2025-07-14T22:41:13.002747800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hd6z,Uid:1ea3f25f-b892-432c-a399-1228ee2630d2,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:13.326126 kubelet[2204]: E0714 22:41:13.326033 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:13.326617 env[1334]: time="2025-07-14T22:41:13.326573718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d4nbf,Uid:ae24e431-76f5-47d8-a8b3-5db74da44c76,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:14.304229 env[1334]: time="2025-07-14T22:41:14.304133153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:41:14.304229 env[1334]: time="2025-07-14T22:41:14.304180754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:41:14.304229 env[1334]: time="2025-07-14T22:41:14.304195061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:41:14.304765 env[1334]: time="2025-07-14T22:41:14.304602197Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/348ea0c870f46540e3d122bd6f2e300c6d87247d52a08f2d73f4f75764ed37d8 pid=2276 runtime=io.containerd.runc.v2 Jul 14 22:41:14.320677 systemd[1]: run-containerd-runc-k8s.io-348ea0c870f46540e3d122bd6f2e300c6d87247d52a08f2d73f4f75764ed37d8-runc.KycF5f.mount: Deactivated successfully. 
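[Annotation] The recurring dns.go:153 "Nameserver limits exceeded" error above is the kubelet capping pod DNS configuration at the classic resolver limit of three nameservers: the host's resolv.conf evidently lists more than 1.1.1.1, 1.0.0.1, and 8.8.8.8, and the extras are silently dropped. A minimal sketch of that truncation logic (a hypothetical standalone program, not kubelet's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the classic resolver limit the kubelet enforces
// when assembling a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, omitting: %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```

The warning is cosmetic as long as the first three servers resolve cluster and upstream names; silencing it means trimming the host's /etc/resolv.conf or pointing the kubelet at a dedicated file via its --resolv-conf flag.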
Jul 14 22:41:14.341172 env[1334]: time="2025-07-14T22:41:14.341118777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28c5x,Uid:c4dac3f3-2252-4c30-9811-29c9b2cb62c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"348ea0c870f46540e3d122bd6f2e300c6d87247d52a08f2d73f4f75764ed37d8\"" Jul 14 22:41:14.341914 kubelet[2204]: E0714 22:41:14.341866 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:14.344562 env[1334]: time="2025-07-14T22:41:14.344483753Z" level=info msg="CreateContainer within sandbox \"348ea0c870f46540e3d122bd6f2e300c6d87247d52a08f2d73f4f75764ed37d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:41:15.933588 env[1334]: time="2025-07-14T22:41:15.933513714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:41:15.933588 env[1334]: time="2025-07-14T22:41:15.933561254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:41:15.933588 env[1334]: time="2025-07-14T22:41:15.933575271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:41:15.934091 env[1334]: time="2025-07-14T22:41:15.933748281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a pid=2318 runtime=io.containerd.runc.v2 Jul 14 22:41:15.964997 env[1334]: time="2025-07-14T22:41:15.964931496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hd6z,Uid:1ea3f25f-b892-432c-a399-1228ee2630d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\"" Jul 14 22:41:15.965663 kubelet[2204]: E0714 22:41:15.965633 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:15.967455 env[1334]: time="2025-07-14T22:41:15.967420932Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 22:41:16.351236 env[1334]: time="2025-07-14T22:41:16.351083735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:41:16.351236 env[1334]: time="2025-07-14T22:41:16.351128821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:41:16.351488 env[1334]: time="2025-07-14T22:41:16.351434593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:41:16.351748 env[1334]: time="2025-07-14T22:41:16.351716220Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226 pid=2359 runtime=io.containerd.runc.v2 Jul 14 22:41:16.423889 env[1334]: time="2025-07-14T22:41:16.423834094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d4nbf,Uid:ae24e431-76f5-47d8-a8b3-5db74da44c76,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226\"" Jul 14 22:41:16.424557 kubelet[2204]: E0714 22:41:16.424534 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:16.986628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1317247957.mount: Deactivated successfully. Jul 14 22:41:17.314874 env[1334]: time="2025-07-14T22:41:17.314694448Z" level=info msg="CreateContainer within sandbox \"348ea0c870f46540e3d122bd6f2e300c6d87247d52a08f2d73f4f75764ed37d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dfe49d7b785777a40a58c4892cc7046ae843d48b1cf7863ed0a4ee9bbc863efb\"" Jul 14 22:41:17.315601 env[1334]: time="2025-07-14T22:41:17.315566178Z" level=info msg="StartContainer for \"dfe49d7b785777a40a58c4892cc7046ae843d48b1cf7863ed0a4ee9bbc863efb\"" Jul 14 22:41:17.479933 env[1334]: time="2025-07-14T22:41:17.479845577Z" level=info msg="StartContainer for \"dfe49d7b785777a40a58c4892cc7046ae843d48b1cf7863ed0a4ee9bbc863efb\" returns successfully" Jul 14 22:41:17.562193 kubelet[2204]: E0714 22:41:17.562139 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:17.825990 kubelet[2204]: I0714 22:41:17.825898 2204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-28c5x" podStartSLOduration=6.825875481 podStartE2EDuration="6.825875481s" podCreationTimestamp="2025-07-14 22:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:17.825734723 +0000 UTC m=+16.375256491" watchObservedRunningTime="2025-07-14 22:41:17.825875481 +0000 UTC m=+16.375397239" Jul 14 22:41:18.564558 kubelet[2204]: E0714 22:41:18.564518 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:24.756501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710253145.mount: Deactivated successfully. 
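[Annotation] The PullImage request above names the cilium image by tag plus digest, and the namespace=k8s.io field in the shim lines is where kubelet-managed containerd resources live. A sketch of reproducing that pull directly with the containerd Go client, assuming the conventional /run/containerd/containerd.sock socket (illustrative only; the kubelet drives this through CRI rather than this client API):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same containerd instance the CRI calls in the log are hitting.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and tasks live in the "k8s.io"
	// namespace, matching the namespace=k8s.io fields in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Digest-pinned ref exactly as it appears in the PullImage entry.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```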
Jul 14 22:41:40.077939 env[1334]: time="2025-07-14T22:41:40.077860998Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:41:40.270161 env[1334]: time="2025-07-14T22:41:40.270107168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:41:40.334228 env[1334]: time="2025-07-14T22:41:40.334082029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:41:40.334781 env[1334]: time="2025-07-14T22:41:40.334757106Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 14 22:41:40.377470 env[1334]: time="2025-07-14T22:41:40.377436527Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 22:41:40.407882 env[1334]: time="2025-07-14T22:41:40.407805022Z" level=info msg="CreateContainer within sandbox \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:41:40.761464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141499788.mount: Deactivated successfully. Jul 14 22:41:41.181486 env[1334]: time="2025-07-14T22:41:41.181209360Z" level=info msg="CreateContainer within sandbox \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4\"" Jul 14 22:41:41.182100 env[1334]: time="2025-07-14T22:41:41.182059348Z" level=info msg="StartContainer for \"ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4\"" Jul 14 22:41:41.604005 env[1334]: time="2025-07-14T22:41:41.603956103Z" level=info msg="StartContainer for \"ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4\" returns successfully" Jul 14 22:41:41.759163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4-rootfs.mount: Deactivated successfully. 
Jul 14 22:41:42.637974 env[1334]: time="2025-07-14T22:41:42.637849395Z" level=error msg="collecting metrics for ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4" error="cgroups: cgroup deleted: unknown" Jul 14 22:41:43.256611 env[1334]: time="2025-07-14T22:41:43.256549616Z" level=info msg="shim disconnected" id=ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4 Jul 14 22:41:43.256611 env[1334]: time="2025-07-14T22:41:43.256611102Z" level=warning msg="cleaning up after shim disconnected" id=ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4 namespace=k8s.io Jul 14 22:41:43.256954 env[1334]: time="2025-07-14T22:41:43.256624547Z" level=info msg="cleaning up dead shim" Jul 14 22:41:43.264296 env[1334]: time="2025-07-14T22:41:43.264225279Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:41:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2606 runtime=io.containerd.runc.v2\n" Jul 14 22:41:43.737681 env[1334]: time="2025-07-14T22:41:43.737634399Z" level=info msg="StopPodSandbox for \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\"" Jul 14 22:41:43.738155 env[1334]: time="2025-07-14T22:41:43.737689453Z" level=info msg="Container to stop \"ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:41:43.739736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a-shm.mount: Deactivated successfully. Jul 14 22:41:43.756944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a-rootfs.mount: Deactivated successfully. Jul 14 22:41:43.853579 env[1334]: time="2025-07-14T22:41:43.853526853Z" level=info msg="shim disconnected" id=29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a Jul 14 22:41:43.854332 env[1334]: time="2025-07-14T22:41:43.854296529Z" level=warning msg="cleaning up after shim disconnected" id=29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a namespace=k8s.io Jul 14 22:41:43.854332 env[1334]: time="2025-07-14T22:41:43.854315425Z" level=info msg="cleaning up dead shim" Jul 14 22:41:43.861198 env[1334]: time="2025-07-14T22:41:43.861132984Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:41:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2639 runtime=io.containerd.runc.v2\n" Jul 14 22:41:43.861535 env[1334]: time="2025-07-14T22:41:43.861504226Z" level=info msg="TearDown network for sandbox \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" successfully" Jul 14 22:41:43.861535 env[1334]: time="2025-07-14T22:41:43.861532399Z" level=info msg="StopPodSandbox for \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" returns successfully" Jul 14 22:41:43.901608 kubelet[2204]: I0714 22:41:43.901522 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-run\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.901608 kubelet[2204]: I0714 22:41:43.901584 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-lib-modules\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: 
\"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.901608 kubelet[2204]: I0714 22:41:43.901619 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6rbp\" (UniqueName: \"kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-kube-api-access-m6rbp\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902187 kubelet[2204]: I0714 22:41:43.901641 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-cgroup\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902187 kubelet[2204]: I0714 22:41:43.901660 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-hostproc\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902187 kubelet[2204]: I0714 22:41:43.901677 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-kernel\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902187 kubelet[2204]: I0714 22:41:43.901680 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902187 kubelet[2204]: I0714 22:41:43.901724 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902404 kubelet[2204]: I0714 22:41:43.901685 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902404 kubelet[2204]: I0714 22:41:43.901700 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-config-path\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902404 kubelet[2204]: I0714 22:41:43.901797 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ea3f25f-b892-432c-a399-1228ee2630d2-clustermesh-secrets\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902404 kubelet[2204]: I0714 22:41:43.901826 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-bpf-maps\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902404 kubelet[2204]: I0714 22:41:43.901848 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-xtables-lock\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902404 kubelet[2204]: I0714 22:41:43.901870 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-hubble-tls\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902627 kubelet[2204]: I0714 22:41:43.901892 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cni-path\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902627 kubelet[2204]: I0714 22:41:43.901910 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-net\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902627 kubelet[2204]: I0714 22:41:43.901935 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-etc-cni-netd\") pod \"1ea3f25f-b892-432c-a399-1228ee2630d2\" (UID: \"1ea3f25f-b892-432c-a399-1228ee2630d2\") " Jul 14 22:41:43.902627 kubelet[2204]: I0714 22:41:43.901977 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:43.902627 kubelet[2204]: I0714 22:41:43.901990 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:43.902627 kubelet[2204]: I0714 22:41:43.902003 2204 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:43.902627 kubelet[2204]: I0714 22:41:43.902028 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902916 kubelet[2204]: I0714 22:41:43.902037 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902916 kubelet[2204]: I0714 22:41:43.902592 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902916 kubelet[2204]: I0714 22:41:43.902614 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902916 kubelet[2204]: I0714 22:41:43.902629 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.902916 kubelet[2204]: I0714 22:41:43.902650 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.903074 kubelet[2204]: I0714 22:41:43.902668 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:41:43.904411 kubelet[2204]: I0714 22:41:43.904360 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:41:43.906512 kubelet[2204]: I0714 22:41:43.906327 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea3f25f-b892-432c-a399-1228ee2630d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:41:43.906512 kubelet[2204]: I0714 22:41:43.906446 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-kube-api-access-m6rbp" (OuterVolumeSpecName: "kube-api-access-m6rbp") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "kube-api-access-m6rbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:41:43.906757 kubelet[2204]: I0714 22:41:43.906732 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1ea3f25f-b892-432c-a399-1228ee2630d2" (UID: "1ea3f25f-b892-432c-a399-1228ee2630d2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:41:43.906862 systemd[1]: var-lib-kubelet-pods-1ea3f25f\x2db892\x2d432c\x2da399\x2d1228ee2630d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6rbp.mount: Deactivated successfully. Jul 14 22:41:43.907021 systemd[1]: var-lib-kubelet-pods-1ea3f25f\x2db892\x2d432c\x2da399\x2d1228ee2630d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 22:41:43.909470 systemd[1]: var-lib-kubelet-pods-1ea3f25f\x2db892\x2d432c\x2da399\x2d1228ee2630d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.002948 2204 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.002992 2204 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.003004 2204 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.003016 2204 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6rbp\" (UniqueName: \"kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-kube-api-access-m6rbp\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.003027 2204 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.003036 2204 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.003047 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ea3f25f-b892-432c-a399-1228ee2630d2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004323 kubelet[2204]: I0714 22:41:44.003056 2204 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ea3f25f-b892-432c-a399-1228ee2630d2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004758 kubelet[2204]: I0714 22:41:44.003065 2204 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004758 kubelet[2204]: I0714 22:41:44.003074 2204 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ea3f25f-b892-432c-a399-1228ee2630d2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.004758 kubelet[2204]: I0714 22:41:44.003083 2204 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ea3f25f-b892-432c-a399-1228ee2630d2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 22:41:44.739879 kubelet[2204]: I0714 22:41:44.739838 2204 scope.go:117] "RemoveContainer" containerID="ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4" Jul 14 22:41:44.741470 env[1334]: time="2025-07-14T22:41:44.741432478Z" level=info msg="RemoveContainer for \"ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4\"" Jul 14 22:41:45.202896 env[1334]: time="2025-07-14T22:41:45.202852977Z" level=info msg="RemoveContainer for \"ede82103b6528a4e47cc5c4852c4b9cef35168a42f243764696fb4bc91706cf4\" returns successfully" Jul 14 22:41:45.328075 
kubelet[2204]: E0714 22:41:45.326918 2204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ea3f25f-b892-432c-a399-1228ee2630d2" containerName="mount-cgroup" Jul 14 22:41:45.328075 kubelet[2204]: I0714 22:41:45.326974 2204 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea3f25f-b892-432c-a399-1228ee2630d2" containerName="mount-cgroup" Jul 14 22:41:45.411796 kubelet[2204]: I0714 22:41:45.411749 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-cgroup\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.411796 kubelet[2204]: I0714 22:41:45.411786 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-net\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.411796 kubelet[2204]: I0714 22:41:45.411803 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hubble-tls\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.411796 kubelet[2204]: I0714 22:41:45.411815 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hostproc\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412052 kubelet[2204]: I0714 22:41:45.411831 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-etc-cni-netd\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412052 kubelet[2204]: I0714 22:41:45.411844 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-xtables-lock\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412052 kubelet[2204]: I0714 22:41:45.411856 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-bpf-maps\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412052 kubelet[2204]: I0714 22:41:45.411870 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-lib-modules\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412052 kubelet[2204]: I0714 22:41:45.411884 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-config-path\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412052 kubelet[2204]: I0714 22:41:45.411900 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-kernel\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412194 kubelet[2204]: I0714 22:41:45.411939 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwd9h\" (UniqueName: \"kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-kube-api-access-rwd9h\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412194 kubelet[2204]: I0714 22:41:45.411955 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-run\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412194 kubelet[2204]: I0714 22:41:45.411968 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cni-path\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.412194 kubelet[2204]: I0714 22:41:45.411982 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-clustermesh-secrets\") pod \"cilium-zzk9l\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " pod="kube-system/cilium-zzk9l" Jul 14 22:41:45.538428 kubelet[2204]: I0714 22:41:45.537828 2204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea3f25f-b892-432c-a399-1228ee2630d2" path="/var/lib/kubelet/pods/1ea3f25f-b892-432c-a399-1228ee2630d2/volumes" Jul 14 22:41:45.630348 kubelet[2204]: E0714 22:41:45.630304 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:45.630966 env[1334]: time="2025-07-14T22:41:45.630904775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzk9l,Uid:ce4ac92b-addb-4d36-ace7-9a52e3bf725e,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:46.776760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947255818.mount: Deactivated successfully. Jul 14 22:41:48.695175 env[1334]: time="2025-07-14T22:41:48.695101255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:41:48.695175 env[1334]: time="2025-07-14T22:41:48.695155096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:41:48.695175 env[1334]: time="2025-07-14T22:41:48.695165857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:41:48.695560 env[1334]: time="2025-07-14T22:41:48.695383598Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58 pid=2668 runtime=io.containerd.runc.v2 Jul 14 22:41:48.730925 env[1334]: time="2025-07-14T22:41:48.730874886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzk9l,Uid:ce4ac92b-addb-4d36-ace7-9a52e3bf725e,Namespace:kube-system,Attempt:0,} returns sandbox id \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\"" Jul 14 22:41:48.732284 kubelet[2204]: E0714 22:41:48.731884 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:48.733381 env[1334]: time="2025-07-14T22:41:48.733331059Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:41:49.072687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054504313.mount: Deactivated successfully. Jul 14 22:41:49.518284 env[1334]: time="2025-07-14T22:41:49.518202327Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e\"" Jul 14 22:41:49.518792 env[1334]: time="2025-07-14T22:41:49.518751585Z" level=info msg="StartContainer for \"e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e\"" Jul 14 22:41:49.706974 env[1334]: time="2025-07-14T22:41:49.706876581Z" level=info msg="StartContainer for \"e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e\" returns successfully" Jul 14 22:41:49.721179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e-rootfs.mount: Deactivated successfully. 
Jul 14 22:41:49.750372 kubelet[2204]: E0714 22:41:49.750340 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:50.501594 env[1334]: time="2025-07-14T22:41:50.501536935Z" level=info msg="shim disconnected" id=e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e Jul 14 22:41:50.501594 env[1334]: time="2025-07-14T22:41:50.501589955Z" level=warning msg="cleaning up after shim disconnected" id=e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e namespace=k8s.io Jul 14 22:41:50.501594 env[1334]: time="2025-07-14T22:41:50.501601227Z" level=info msg="cleaning up dead shim" Jul 14 22:41:50.509420 env[1334]: time="2025-07-14T22:41:50.509372875Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:41:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2750 runtime=io.containerd.runc.v2\n" Jul 14 22:41:50.753170 kubelet[2204]: E0714 22:41:50.753031 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:50.756049 env[1334]: time="2025-07-14T22:41:50.756007996Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:41:51.541460 env[1334]: time="2025-07-14T22:41:51.541328692Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d\"" Jul 14 22:41:51.542298 env[1334]: time="2025-07-14T22:41:51.541903659Z" level=info msg="StartContainer for \"3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d\"" Jul 14 22:41:51.613561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:41:51.613820 systemd[1]: Stopped systemd-sysctl.service. Jul 14 22:41:51.615731 systemd[1]: Stopping systemd-sysctl.service... Jul 14 22:41:51.617253 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:41:51.619205 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 22:41:51.625583 systemd[1]: Finished systemd-sysctl.service. Jul 14 22:41:51.643118 env[1334]: time="2025-07-14T22:41:51.643007451Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:41:51.643118 env[1334]: time="2025-07-14T22:41:51.643099484Z" level=info msg="StartContainer for \"3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d\" returns successfully" Jul 14 22:41:51.756721 kubelet[2204]: E0714 22:41:51.756684 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:52.160019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d-rootfs.mount: Deactivated successfully. 
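[Annotation] "shim disconnected" followed by "cleaning up dead shim" is the normal teardown for a run-to-completion init container such as mount-cgroup: the process exits, the runc v2 shim shuts down, and containerd reaps it (the earlier "cgroups: cgroup deleted" metrics error is the collector racing that cleanup). A sketch of observing such an exit through the containerd client while the task is still live, using the container ID from the log (assumptions: the k8s.io namespace and default socket path as above):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// mount-cgroup container ID taken from the log; this only works
	// before containerd finishes reaping the dead shim.
	c, err := client.LoadContainer(ctx, "e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	status := <-exitCh
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("exit code:", code)
}
```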
Jul 14 22:41:52.184037 env[1334]: time="2025-07-14T22:41:52.183978724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:41:52.374258 env[1334]: time="2025-07-14T22:41:52.374203396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:41:52.374922 env[1334]: time="2025-07-14T22:41:52.374895364Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 14 22:41:52.376834 env[1334]: time="2025-07-14T22:41:52.376808438Z" level=info msg="CreateContainer within sandbox \"9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 22:41:52.399349 env[1334]: time="2025-07-14T22:41:52.399285028Z" level=info msg="shim disconnected" id=3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d Jul 14 22:41:52.399349 env[1334]: time="2025-07-14T22:41:52.399346124Z" level=warning msg="cleaning up after shim disconnected" id=3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d namespace=k8s.io Jul 14 22:41:52.399548 env[1334]: time="2025-07-14T22:41:52.399360521Z" level=info msg="cleaning up dead shim" Jul 14 22:41:52.405628 env[1334]: time="2025-07-14T22:41:52.405557079Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:41:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2815 runtime=io.containerd.runc.v2\n" Jul 14 22:41:52.758965 kubelet[2204]: E0714 22:41:52.758921 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:52.764281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183607933.mount: Deactivated successfully. Jul 14 22:41:52.769121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668088584.mount: Deactivated successfully. 
Jul 14 22:41:52.776431 env[1334]: time="2025-07-14T22:41:52.776389383Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:41:53.106320 env[1334]: time="2025-07-14T22:41:53.106167742Z" level=info msg="CreateContainer within sandbox \"9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\"" Jul 14 22:41:53.106799 env[1334]: time="2025-07-14T22:41:53.106760091Z" level=info msg="StartContainer for \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\"" Jul 14 22:41:53.299477 env[1334]: time="2025-07-14T22:41:53.299415263Z" level=info msg="StartContainer for \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\" returns successfully" Jul 14 22:41:53.761341 kubelet[2204]: E0714 22:41:53.761310 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:53.971629 env[1334]: time="2025-07-14T22:41:53.971572552Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771\"" Jul 14 22:41:53.971995 env[1334]: time="2025-07-14T22:41:53.971968180Z" level=info msg="StartContainer for \"69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771\"" Jul 14 22:41:54.139236 env[1334]: time="2025-07-14T22:41:54.139101231Z" level=info msg="StartContainer for \"69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771\" returns successfully" Jul 14 22:41:54.159403 systemd[1]: run-containerd-runc-k8s.io-69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771-runc.uSAZtx.mount: Deactivated successfully. Jul 14 22:41:54.159547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771-rootfs.mount: Deactivated successfully. 
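[Annotation] mount-bpf-fs, the third cilium init container started above, essentially ensures the BPF filesystem is mounted at /sys/fs/bpf so pinned maps survive agent restarts. A minimal sketch of the underlying mount call (illustrative; the real init container first checks whether bpffs is already mounted rather than relying on the error value):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Roughly what cilium's mount-bpf-fs init container does. EBUSY
	// here is treated as "already mounted" for the sake of the sketch.
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if err != nil && err != unix.EBUSY {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```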
Jul 14 22:41:54.372974 env[1334]: time="2025-07-14T22:41:54.372921062Z" level=info msg="shim disconnected" id=69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771 Jul 14 22:41:54.372974 env[1334]: time="2025-07-14T22:41:54.372970505Z" level=warning msg="cleaning up after shim disconnected" id=69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771 namespace=k8s.io Jul 14 22:41:54.372974 env[1334]: time="2025-07-14T22:41:54.372979653Z" level=info msg="cleaning up dead shim" Jul 14 22:41:54.380698 env[1334]: time="2025-07-14T22:41:54.380643119Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:41:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2910 runtime=io.containerd.runc.v2\n" Jul 14 22:41:54.458948 kubelet[2204]: I0714 22:41:54.458854 2204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-d4nbf" podStartSLOduration=6.508156384 podStartE2EDuration="42.458816825s" podCreationTimestamp="2025-07-14 22:41:12 +0000 UTC" firstStartedPulling="2025-07-14 22:41:16.425100026 +0000 UTC m=+14.974621795" lastFinishedPulling="2025-07-14 22:41:52.375760467 +0000 UTC m=+50.925282236" observedRunningTime="2025-07-14 22:41:54.45839625 +0000 UTC m=+53.007918038" watchObservedRunningTime="2025-07-14 22:41:54.458816825 +0000 UTC m=+53.008338613" Jul 14 22:41:54.770461 kubelet[2204]: E0714 22:41:54.770334 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:54.771161 kubelet[2204]: E0714 22:41:54.770649 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:54.783370 env[1334]: time="2025-07-14T22:41:54.783324341Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:41:54.943519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112291966.mount: Deactivated successfully. Jul 14 22:41:55.098597 env[1334]: time="2025-07-14T22:41:55.098475450Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120\"" Jul 14 22:41:55.099745 env[1334]: time="2025-07-14T22:41:55.099704342Z" level=info msg="StartContainer for \"71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120\"" Jul 14 22:41:55.238908 env[1334]: time="2025-07-14T22:41:55.238829207Z" level=info msg="StartContainer for \"71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120\" returns successfully" Jul 14 22:41:55.250958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120-rootfs.mount: Deactivated successfully. 
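[Annotation] The pod_startup_latency_tracker entry for cilium-operator shows how the SLO figure is derived: podStartE2EDuration (creation to observed running, ~42.46s) minus the image pull window (firstStartedPulling to lastFinishedPulling, ~35.95s) yields podStartSLOduration=6.508s, because time spent pulling images is excluded from the startup SLO. The arithmetic, reproduced from the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry.
	created := mustParse("2025-07-14 22:41:12 +0000 UTC")
	firstPull := mustParse("2025-07-14 22:41:16.425100026 +0000 UTC")
	lastPull := mustParse("2025-07-14 22:41:52.375760467 +0000 UTC")
	running := mustParse("2025-07-14 22:41:54.45839625 +0000 UTC")

	e2e := running.Sub(created)     // ~42.458s; the logged 42.458816825s was sampled a hair later
	pull := lastPull.Sub(firstPull) // ~35.951s spent pulling the operator image
	fmt.Println("podStartE2EDuration ~", e2e)
	fmt.Println("podStartSLOduration ~", e2e-pull) // ~6.508s, matching the log
}
```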
Jul 14 22:41:55.727541 env[1334]: time="2025-07-14T22:41:55.727486938Z" level=info msg="shim disconnected" id=71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120 Jul 14 22:41:55.727541 env[1334]: time="2025-07-14T22:41:55.727536021Z" level=warning msg="cleaning up after shim disconnected" id=71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120 namespace=k8s.io Jul 14 22:41:55.727541 env[1334]: time="2025-07-14T22:41:55.727546621Z" level=info msg="cleaning up dead shim" Jul 14 22:41:55.733738 env[1334]: time="2025-07-14T22:41:55.733680266Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:41:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2966 runtime=io.containerd.runc.v2\n" Jul 14 22:41:55.905081 kubelet[2204]: E0714 22:41:55.905038 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:55.907869 env[1334]: time="2025-07-14T22:41:55.907803299Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:41:56.462747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076049907.mount: Deactivated successfully. Jul 14 22:41:57.060193 env[1334]: time="2025-07-14T22:41:57.060114567Z" level=info msg="CreateContainer within sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\"" Jul 14 22:41:57.060691 env[1334]: time="2025-07-14T22:41:57.060668463Z" level=info msg="StartContainer for \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\"" Jul 14 22:41:57.233183 env[1334]: time="2025-07-14T22:41:57.233103195Z" level=info msg="StartContainer for \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\" returns successfully" Jul 14 22:41:57.336986 kubelet[2204]: I0714 22:41:57.336898 2204 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 22:41:57.459404 systemd[1]: run-containerd-runc-k8s.io-5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a-runc.1CvATC.mount: Deactivated successfully. 
Jul 14 22:41:57.571031 kubelet[2204]: W0714 22:41:57.570984 2204 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 14 22:41:57.571344 kubelet[2204]: E0714 22:41:57.571046 2204 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 14 22:41:57.605234 kubelet[2204]: I0714 22:41:57.605103 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55cb3708-b450-4674-a8b0-9b632af58c9f-config-volume\") pod \"coredns-7c65d6cfc9-hjwh2\" (UID: \"55cb3708-b450-4674-a8b0-9b632af58c9f\") " pod="kube-system/coredns-7c65d6cfc9-hjwh2" Jul 14 22:41:57.605234 kubelet[2204]: I0714 22:41:57.605140 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-696bm\" (UniqueName: \"kubernetes.io/projected/55cb3708-b450-4674-a8b0-9b632af58c9f-kube-api-access-696bm\") pod \"coredns-7c65d6cfc9-hjwh2\" (UID: \"55cb3708-b450-4674-a8b0-9b632af58c9f\") " pod="kube-system/coredns-7c65d6cfc9-hjwh2" Jul 14 22:41:57.605234 kubelet[2204]: I0714 22:41:57.605159 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28b6a5ac-772b-498b-84f1-97cdad50d82a-config-volume\") pod \"coredns-7c65d6cfc9-ljjq5\" (UID: \"28b6a5ac-772b-498b-84f1-97cdad50d82a\") " pod="kube-system/coredns-7c65d6cfc9-ljjq5" Jul 14 22:41:57.605234 kubelet[2204]: I0714 22:41:57.605174 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tfx\" (UniqueName: \"kubernetes.io/projected/28b6a5ac-772b-498b-84f1-97cdad50d82a-kube-api-access-p2tfx\") pod \"coredns-7c65d6cfc9-ljjq5\" (UID: \"28b6a5ac-772b-498b-84f1-97cdad50d82a\") " pod="kube-system/coredns-7c65d6cfc9-ljjq5" Jul 14 22:41:57.910807 kubelet[2204]: E0714 22:41:57.910690 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:57.925018 kubelet[2204]: I0714 22:41:57.924952 2204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zzk9l" podStartSLOduration=12.924932641 podStartE2EDuration="12.924932641s" podCreationTimestamp="2025-07-14 22:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:41:57.924760806 +0000 UTC m=+56.474282574" watchObservedRunningTime="2025-07-14 22:41:57.924932641 +0000 UTC m=+56.474454399" Jul 14 22:41:58.706995 kubelet[2204]: E0714 22:41:58.706934 2204 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 14 22:41:58.707571 kubelet[2204]: E0714 22:41:58.707057 2204 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/55cb3708-b450-4674-a8b0-9b632af58c9f-config-volume podName:55cb3708-b450-4674-a8b0-9b632af58c9f nodeName:}" failed. No retries permitted until 2025-07-14 22:41:59.207029092 +0000 UTC m=+57.756550870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/55cb3708-b450-4674-a8b0-9b632af58c9f-config-volume") pod "coredns-7c65d6cfc9-hjwh2" (UID: "55cb3708-b450-4674-a8b0-9b632af58c9f") : failed to sync configmap cache: timed out waiting for the condition Jul 14 22:41:58.707571 kubelet[2204]: E0714 22:41:58.706934 2204 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 14 22:41:58.707571 kubelet[2204]: E0714 22:41:58.707123 2204 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28b6a5ac-772b-498b-84f1-97cdad50d82a-config-volume podName:28b6a5ac-772b-498b-84f1-97cdad50d82a nodeName:}" failed. No retries permitted until 2025-07-14 22:41:59.207108792 +0000 UTC m=+57.756630560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/28b6a5ac-772b-498b-84f1-97cdad50d82a-config-volume") pod "coredns-7c65d6cfc9-ljjq5" (UID: "28b6a5ac-772b-498b-84f1-97cdad50d82a") : failed to sync configmap cache: timed out waiting for the condition Jul 14 22:41:58.912678 kubelet[2204]: E0714 22:41:58.912648 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:59.288335 systemd-networkd[1100]: cilium_host: Link UP Jul 14 22:41:59.288439 systemd-networkd[1100]: cilium_net: Link UP Jul 14 22:41:59.288442 systemd-networkd[1100]: cilium_net: Gained carrier Jul 14 22:41:59.289534 systemd-networkd[1100]: cilium_host: Gained carrier Jul 14 22:41:59.291010 systemd-networkd[1100]: cilium_host: Gained IPv6LL Jul 14 22:41:59.291443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 14 22:41:59.307164 systemd-networkd[1100]: cilium_net: Gained IPv6LL Jul 14 22:41:59.357596 systemd-networkd[1100]: cilium_vxlan: Link UP Jul 14 22:41:59.357605 systemd-networkd[1100]: cilium_vxlan: Gained carrier Jul 14 22:41:59.373459 kubelet[2204]: E0714 22:41:59.373427 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:59.374524 env[1334]: time="2025-07-14T22:41:59.373920814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ljjq5,Uid:28b6a5ac-772b-498b-84f1-97cdad50d82a,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:59.374830 kubelet[2204]: E0714 22:41:59.374382 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:41:59.374881 env[1334]: time="2025-07-14T22:41:59.374694986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hjwh2,Uid:55cb3708-b450-4674-a8b0-9b632af58c9f,Namespace:kube-system,Attempt:0,}" Jul 14 22:41:59.573307 kernel: NET: Registered PF_ALG protocol family Jul 14 22:41:59.914286 kubelet[2204]: E0714 22:41:59.914227 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:00.079840 systemd-networkd[1100]: lxc_health: Link UP Jul 14 22:42:00.120831 systemd-networkd[1100]: lxc_health: Gained carrier Jul 14 22:42:00.121295 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 14 22:42:00.436973 systemd-networkd[1100]: lxc7af5d91efa2e: Link UP Jul 14 22:42:00.447356 kernel: eth0: renamed from tmpb25b7 Jul 14 22:42:00.460256 systemd-networkd[1100]: lxced1c9e2e68fb: Link UP Jul 14 22:42:00.468315 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:42:00.468373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7af5d91efa2e: link becomes ready Jul 14 22:42:00.468462 systemd-networkd[1100]: lxc7af5d91efa2e: Gained carrier Jul 14 22:42:00.471318 kernel: eth0: renamed from tmpfba26 Jul 14 22:42:00.479553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:42:00.479680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxced1c9e2e68fb: link becomes ready Jul 14 22:42:00.480665 systemd-networkd[1100]: lxced1c9e2e68fb: Gained carrier Jul 14 22:42:00.709521 systemd-networkd[1100]: cilium_vxlan: Gained IPv6LL Jul 14 22:42:01.518844 env[1334]: time="2025-07-14T22:42:01.518805169Z" level=info msg="StopPodSandbox for \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\"" Jul 14 22:42:01.519194 env[1334]: time="2025-07-14T22:42:01.518881823Z" level=info msg="TearDown network for sandbox \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" successfully" Jul 14 22:42:01.519194 env[1334]: time="2025-07-14T22:42:01.518911280Z" level=info msg="StopPodSandbox for \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" returns successfully" Jul 14 22:42:01.519287 env[1334]: time="2025-07-14T22:42:01.519228679Z" level=info msg="RemovePodSandbox for \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\"" Jul 14 22:42:01.519327 env[1334]: time="2025-07-14T22:42:01.519287580Z" level=info msg="Forcibly stopping sandbox \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\"" Jul 14 22:42:01.519392 env[1334]: time="2025-07-14T22:42:01.519374304Z" level=info msg="TearDown network for sandbox \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" successfully" Jul 14 22:42:01.522353 env[1334]: time="2025-07-14T22:42:01.522323280Z" level=info msg="RemovePodSandbox \"29cde49264240907554325f5562f912cd0396bab1a335116f006169990e8c56a\" returns successfully" Jul 14 22:42:01.632032 kubelet[2204]: E0714 22:42:01.631995 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:01.914451 systemd-networkd[1100]: lxced1c9e2e68fb: Gained IPv6LL Jul 14 22:42:01.916756 kubelet[2204]: E0714 22:42:01.916736 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:02.042391 systemd-networkd[1100]: lxc_health: Gained IPv6LL Jul 14 22:42:02.490400 systemd-networkd[1100]: lxc7af5d91efa2e: Gained IPv6LL Jul 14 22:42:02.707647 systemd[1]: run-containerd-runc-k8s.io-5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a-runc.8pgqxb.mount: Deactivated successfully. Jul 14 22:42:03.853132 env[1334]: time="2025-07-14T22:42:03.853049936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:42:03.853132 env[1334]: time="2025-07-14T22:42:03.853096624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:42:03.853132 env[1334]: time="2025-07-14T22:42:03.853107114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:42:03.853716 env[1334]: time="2025-07-14T22:42:03.853659677Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fba269267aac7294a046f5233257cf812d4ca606e4576d6232eb3fec94be8096 pid=3594 runtime=io.containerd.runc.v2 Jul 14 22:42:03.874110 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:42:03.895714 env[1334]: time="2025-07-14T22:42:03.895672981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hjwh2,Uid:55cb3708-b450-4674-a8b0-9b632af58c9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fba269267aac7294a046f5233257cf812d4ca606e4576d6232eb3fec94be8096\"" Jul 14 22:42:03.896694 kubelet[2204]: E0714 22:42:03.896213 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:03.898573 env[1334]: time="2025-07-14T22:42:03.898515826Z" level=info msg="CreateContainer within sandbox \"fba269267aac7294a046f5233257cf812d4ca606e4576d6232eb3fec94be8096\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:42:03.995775 env[1334]: time="2025-07-14T22:42:03.995702601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:42:03.995775 env[1334]: time="2025-07-14T22:42:03.995750611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:42:03.995775 env[1334]: time="2025-07-14T22:42:03.995763575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:42:03.996009 env[1334]: time="2025-07-14T22:42:03.995967671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b25b7057e4ee98400d5ba757e822a6cac6e6140a29447b2d71942191088a5219 pid=3635 runtime=io.containerd.runc.v2 Jul 14 22:42:04.018035 systemd-resolved[1239]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:42:04.037758 env[1334]: time="2025-07-14T22:42:04.037705949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ljjq5,Uid:28b6a5ac-772b-498b-84f1-97cdad50d82a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b25b7057e4ee98400d5ba757e822a6cac6e6140a29447b2d71942191088a5219\"" Jul 14 22:42:04.038296 kubelet[2204]: E0714 22:42:04.038254 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:04.039585 env[1334]: time="2025-07-14T22:42:04.039543337Z" level=info msg="CreateContainer within sandbox \"b25b7057e4ee98400d5ba757e822a6cac6e6140a29447b2d71942191088a5219\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:42:04.365629 env[1334]: time="2025-07-14T22:42:04.365586369Z" level=info msg="CreateContainer within sandbox \"fba269267aac7294a046f5233257cf812d4ca606e4576d6232eb3fec94be8096\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64704cfa4f52d97301d39efffe245e9d0387e8cf776a40552889428c15c12a15\"" Jul 14 22:42:04.366536 env[1334]: time="2025-07-14T22:42:04.366460349Z" level=info msg="StartContainer for \"64704cfa4f52d97301d39efffe245e9d0387e8cf776a40552889428c15c12a15\"" Jul 14 22:42:04.370651 env[1334]: time="2025-07-14T22:42:04.370593589Z" level=info msg="CreateContainer within sandbox \"b25b7057e4ee98400d5ba757e822a6cac6e6140a29447b2d71942191088a5219\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd879190e9b00e9173d15b1603f1b9148d6d5d0cafd3aded8c942073f5dca075\"" Jul 14 22:42:04.371545 env[1334]: time="2025-07-14T22:42:04.371515058Z" level=info msg="StartContainer for \"dd879190e9b00e9173d15b1603f1b9148d6d5d0cafd3aded8c942073f5dca075\"" Jul 14 22:42:04.410102 env[1334]: time="2025-07-14T22:42:04.410057525Z" level=info msg="StartContainer for \"dd879190e9b00e9173d15b1603f1b9148d6d5d0cafd3aded8c942073f5dca075\" returns successfully" Jul 14 22:42:04.412550 env[1334]: time="2025-07-14T22:42:04.412505274Z" level=info msg="StartContainer for \"64704cfa4f52d97301d39efffe245e9d0387e8cf776a40552889428c15c12a15\" returns successfully" Jul 14 22:42:04.923552 kubelet[2204]: E0714 22:42:04.923518 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:04.925816 kubelet[2204]: E0714 22:42:04.925779 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:05.135063 kubelet[2204]: I0714 22:42:05.134983 2204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ljjq5" podStartSLOduration=53.134958554 podStartE2EDuration="53.134958554s" podCreationTimestamp="2025-07-14 22:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:42:05.035142259 +0000 UTC m=+63.584664027" watchObservedRunningTime="2025-07-14 22:42:05.134958554 +0000 UTC m=+63.684480322" Jul 14 22:42:05.151232 kubelet[2204]: I0714 22:42:05.151170 2204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hjwh2" podStartSLOduration=53.151150269 podStartE2EDuration="53.151150269s" podCreationTimestamp="2025-07-14 22:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:42:05.15088573 +0000 UTC m=+63.700407498" watchObservedRunningTime="2025-07-14 22:42:05.151150269 +0000 UTC m=+63.700672037" Jul 14 22:42:05.927921 kubelet[2204]: E0714 22:42:05.927894 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:05.928360 kubelet[2204]: E0714 22:42:05.927966 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:06.929092 kubelet[2204]: E0714 22:42:06.929064 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:06.929554 kubelet[2204]: E0714 22:42:06.929157 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:07.495367 sudo[1446]: pam_unix(sudo:session): session closed for user root Jul 14 22:42:07.499353 sshd[1439]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:07.501727 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:39284.service: Deactivated successfully. Jul 14 22:42:07.502554 systemd-logind[1314]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:42:07.502585 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:42:07.503292 systemd-logind[1314]: Removed session 5. Jul 14 22:42:14.529782 kubelet[2204]: E0714 22:42:14.529732 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:16.530045 kubelet[2204]: E0714 22:42:16.530006 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:21.530189 kubelet[2204]: E0714 22:42:21.530142 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:33.529557 kubelet[2204]: E0714 22:42:33.529511 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:42:37.800574 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:49962.service. 
Jul 14 22:42:37.835881 sshd[3822]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:42:37.837059 sshd[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:42:37.840943 systemd-logind[1314]: New session 6 of user core. Jul 14 22:42:37.841861 systemd[1]: Started session-6.scope. Jul 14 22:42:37.955807 sshd[3822]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:37.957997 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:49962.service: Deactivated successfully. Jul 14 22:42:37.959345 systemd-logind[1314]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:42:37.959432 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:42:37.960317 systemd-logind[1314]: Removed session 6. Jul 14 22:42:42.959353 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:35704.service. Jul 14 22:42:42.992835 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 35704 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:42:42.993905 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:42:42.997158 systemd-logind[1314]: New session 7 of user core. Jul 14 22:42:42.998157 systemd[1]: Started session-7.scope. Jul 14 22:42:43.121823 sshd[3837]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:43.124049 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:35704.service: Deactivated successfully. Jul 14 22:42:43.124918 systemd-logind[1314]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:42:43.124961 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:42:43.125604 systemd-logind[1314]: Removed session 7. Jul 14 22:42:48.126114 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:35732.service. Jul 14 22:42:48.160931 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 35732 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:42:48.162006 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:42:48.165039 systemd-logind[1314]: New session 8 of user core. Jul 14 22:42:48.165762 systemd[1]: Started session-8.scope. Jul 14 22:42:48.266408 sshd[3854]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:48.269002 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:35732.service: Deactivated successfully. Jul 14 22:42:48.270037 systemd-logind[1314]: Session 8 logged out. Waiting for processes to exit. Jul 14 22:42:48.270093 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 22:42:48.270886 systemd-logind[1314]: Removed session 8. Jul 14 22:42:53.269667 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:35740.service. Jul 14 22:42:53.301329 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 35740 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:42:53.302213 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:42:53.305948 systemd-logind[1314]: New session 9 of user core. Jul 14 22:42:53.306683 systemd[1]: Started session-9.scope. Jul 14 22:42:53.409519 sshd[3869]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:53.411776 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:35740.service: Deactivated successfully. Jul 14 22:42:53.412596 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:42:53.413540 systemd-logind[1314]: Session 9 logged out. Waiting for processes to exit. Jul 14 22:42:53.414376 systemd-logind[1314]: Removed session 9. 
Jul 14 22:42:58.412363 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:34564.service. Jul 14 22:42:58.444980 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 34564 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:42:58.446097 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:42:58.449308 systemd-logind[1314]: New session 10 of user core. Jul 14 22:42:58.450177 systemd[1]: Started session-10.scope. Jul 14 22:42:58.550177 sshd[3884]: pam_unix(sshd:session): session closed for user core Jul 14 22:42:58.552426 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:34564.service: Deactivated successfully. Jul 14 22:42:58.553380 systemd-logind[1314]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:42:58.553450 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:42:58.554253 systemd-logind[1314]: Removed session 10. Jul 14 22:43:03.553351 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:34580.service. Jul 14 22:43:03.587962 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 34580 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:03.589435 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:03.593460 systemd-logind[1314]: New session 11 of user core. Jul 14 22:43:03.594162 systemd[1]: Started session-11.scope. Jul 14 22:43:03.702785 sshd[3901]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:03.705551 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:34580.service: Deactivated successfully. Jul 14 22:43:03.706728 systemd-logind[1314]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:43:03.706754 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:43:03.707661 systemd-logind[1314]: Removed session 11. Jul 14 22:43:07.529732 kubelet[2204]: E0714 22:43:07.529667 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:08.705967 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:56796.service. Jul 14 22:43:08.738160 sshd[3916]: Accepted publickey for core from 10.0.0.1 port 56796 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:08.739252 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:08.742604 systemd-logind[1314]: New session 12 of user core. Jul 14 22:43:08.743578 systemd[1]: Started session-12.scope. Jul 14 22:43:08.859820 sshd[3916]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:08.861884 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:56796.service: Deactivated successfully. Jul 14 22:43:08.862979 systemd-logind[1314]: Session 12 logged out. Waiting for processes to exit. Jul 14 22:43:08.863027 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:43:08.863997 systemd-logind[1314]: Removed session 12. Jul 14 22:43:10.529420 kubelet[2204]: E0714 22:43:10.529388 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:13.530179 kubelet[2204]: E0714 22:43:13.530135 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:13.863656 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:56812.service. 
Jul 14 22:43:13.896528 sshd[3931]: Accepted publickey for core from 10.0.0.1 port 56812 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:13.897671 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:13.900750 systemd-logind[1314]: New session 13 of user core. Jul 14 22:43:13.901464 systemd[1]: Started session-13.scope. Jul 14 22:43:14.000706 sshd[3931]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:14.002734 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:56812.service: Deactivated successfully. Jul 14 22:43:14.003674 systemd-logind[1314]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:43:14.003757 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:43:14.004478 systemd-logind[1314]: Removed session 13. Jul 14 22:43:19.004724 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:59788.service. Jul 14 22:43:19.036076 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 59788 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:19.053954 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:19.057326 systemd-logind[1314]: New session 14 of user core. Jul 14 22:43:19.058198 systemd[1]: Started session-14.scope. Jul 14 22:43:19.201663 sshd[3948]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:19.204374 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:59804.service. Jul 14 22:43:19.204813 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:59788.service: Deactivated successfully. Jul 14 22:43:19.205726 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 22:43:19.206090 systemd-logind[1314]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:43:19.206826 systemd-logind[1314]: Removed session 14. Jul 14 22:43:19.240641 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 59804 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:19.241711 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:19.244972 systemd-logind[1314]: New session 15 of user core. Jul 14 22:43:19.245730 systemd[1]: Started session-15.scope. Jul 14 22:43:19.451918 sshd[3962]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:19.454596 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:59818.service. Jul 14 22:43:19.455057 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:59804.service: Deactivated successfully. Jul 14 22:43:19.457512 systemd-logind[1314]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:43:19.457517 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:43:19.458579 systemd-logind[1314]: Removed session 15. Jul 14 22:43:19.488846 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 59818 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:19.490442 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:19.495798 systemd-logind[1314]: New session 16 of user core. Jul 14 22:43:19.496591 systemd[1]: Started session-16.scope. Jul 14 22:43:19.620029 sshd[3975]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:19.622586 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:59818.service: Deactivated successfully. Jul 14 22:43:19.623611 systemd-logind[1314]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:43:19.623634 systemd[1]: session-16.scope: Deactivated successfully. 
Jul 14 22:43:19.624374 systemd-logind[1314]: Removed session 16. Jul 14 22:43:24.623962 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:59828.service. Jul 14 22:43:24.654978 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 59828 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:24.656526 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:24.659467 systemd-logind[1314]: New session 17 of user core. Jul 14 22:43:24.660156 systemd[1]: Started session-17.scope. Jul 14 22:43:24.799411 sshd[3990]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:24.801426 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:59828.service: Deactivated successfully. Jul 14 22:43:24.802533 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:43:24.802616 systemd-logind[1314]: Session 17 logged out. Waiting for processes to exit. Jul 14 22:43:24.803330 systemd-logind[1314]: Removed session 17. Jul 14 22:43:29.529549 kubelet[2204]: E0714 22:43:29.529505 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:29.802531 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:52844.service. Jul 14 22:43:29.837438 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:29.838840 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:29.842784 systemd-logind[1314]: New session 18 of user core. Jul 14 22:43:29.843767 systemd[1]: Started session-18.scope. Jul 14 22:43:29.983585 sshd[4004]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:29.986050 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:52844.service: Deactivated successfully. Jul 14 22:43:29.986792 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:43:29.987472 systemd-logind[1314]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:43:29.988066 systemd-logind[1314]: Removed session 18. Jul 14 22:43:34.986824 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:52852.service. Jul 14 22:43:35.025424 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 52852 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:35.026661 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:35.030596 systemd-logind[1314]: New session 19 of user core. Jul 14 22:43:35.031396 systemd[1]: Started session-19.scope. Jul 14 22:43:35.136931 sshd[4022]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:35.139649 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:52854.service. Jul 14 22:43:35.140217 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:52852.service: Deactivated successfully. Jul 14 22:43:35.141665 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:43:35.143353 systemd-logind[1314]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:43:35.144498 systemd-logind[1314]: Removed session 19. Jul 14 22:43:35.178342 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 52854 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:35.179630 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:35.183283 systemd-logind[1314]: New session 20 of user core. Jul 14 22:43:35.184276 systemd[1]: Started session-20.scope. 
Jul 14 22:43:35.706059 sshd[4035]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:35.709367 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:52866.service. Jul 14 22:43:35.709797 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:52854.service: Deactivated successfully. Jul 14 22:43:35.710809 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 22:43:35.710901 systemd-logind[1314]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:43:35.711869 systemd-logind[1314]: Removed session 20. Jul 14 22:43:35.745089 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 52866 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:35.746243 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:35.750208 systemd-logind[1314]: New session 21 of user core. Jul 14 22:43:35.751100 systemd[1]: Started session-21.scope. Jul 14 22:43:37.530094 kubelet[2204]: E0714 22:43:37.529979 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:40.529852 kubelet[2204]: E0714 22:43:40.529794 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:40.530355 kubelet[2204]: E0714 22:43:40.530134 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:48.974028 sshd[4048]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:48.976783 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:52866.service: Deactivated successfully. Jul 14 22:43:48.977706 systemd-logind[1314]: Session 21 logged out. Waiting for processes to exit. Jul 14 22:43:48.977743 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 22:43:48.978536 systemd-logind[1314]: Removed session 21. Jul 14 22:43:48.985254 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:41072.service. Jul 14 22:43:49.018412 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 41072 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:49.019629 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:49.023317 systemd-logind[1314]: New session 22 of user core. Jul 14 22:43:49.024087 systemd[1]: Started session-22.scope. Jul 14 22:43:49.901354 sshd[4070]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:49.903814 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:41082.service. Jul 14 22:43:49.904683 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:41072.service: Deactivated successfully. Jul 14 22:43:49.905608 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 22:43:49.906166 systemd-logind[1314]: Session 22 logged out. Waiting for processes to exit. Jul 14 22:43:49.906910 systemd-logind[1314]: Removed session 22. Jul 14 22:43:49.937183 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 41082 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:49.938623 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:49.942246 systemd-logind[1314]: New session 23 of user core. Jul 14 22:43:49.942993 systemd[1]: Started session-23.scope. 
Jul 14 22:43:50.195118 sshd[4080]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:50.197949 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:41082.service: Deactivated successfully. Jul 14 22:43:50.198977 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 22:43:50.199082 systemd-logind[1314]: Session 23 logged out. Waiting for processes to exit. Jul 14 22:43:50.199969 systemd-logind[1314]: Removed session 23. Jul 14 22:43:53.529612 kubelet[2204]: E0714 22:43:53.529565 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:43:55.197773 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:41088.service. Jul 14 22:43:55.230234 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 41088 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:43:55.231150 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:43:55.234722 systemd-logind[1314]: New session 24 of user core. Jul 14 22:43:55.235677 systemd[1]: Started session-24.scope. Jul 14 22:43:55.341140 sshd[4097]: pam_unix(sshd:session): session closed for user core Jul 14 22:43:55.343376 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:41088.service: Deactivated successfully. Jul 14 22:43:55.344797 systemd-logind[1314]: Session 24 logged out. Waiting for processes to exit. Jul 14 22:43:55.344960 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 22:43:55.345918 systemd-logind[1314]: Removed session 24. Jul 14 22:44:00.344291 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:44460.service. Jul 14 22:44:00.378163 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 44460 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:00.379480 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:00.383304 systemd-logind[1314]: New session 25 of user core. Jul 14 22:44:00.384022 systemd[1]: Started session-25.scope. Jul 14 22:44:00.490407 sshd[4111]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:00.492384 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:44460.service: Deactivated successfully. Jul 14 22:44:00.493295 systemd[1]: session-25.scope: Deactivated successfully. Jul 14 22:44:00.493866 systemd-logind[1314]: Session 25 logged out. Waiting for processes to exit. Jul 14 22:44:00.494515 systemd-logind[1314]: Removed session 25. Jul 14 22:44:05.494660 systemd[1]: Started sshd@25-10.0.0.12:22-10.0.0.1:44476.service. Jul 14 22:44:05.526505 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 44476 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:05.527831 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:05.531643 systemd-logind[1314]: New session 26 of user core. Jul 14 22:44:05.532638 systemd[1]: Started session-26.scope. Jul 14 22:44:05.634048 sshd[4127]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:05.636739 systemd[1]: sshd@25-10.0.0.12:22-10.0.0.1:44476.service: Deactivated successfully. Jul 14 22:44:05.637850 systemd-logind[1314]: Session 26 logged out. Waiting for processes to exit. Jul 14 22:44:05.637929 systemd[1]: session-26.scope: Deactivated successfully. Jul 14 22:44:05.638554 systemd-logind[1314]: Removed session 26. Jul 14 22:44:10.638085 systemd[1]: Started sshd@26-10.0.0.12:22-10.0.0.1:39542.service. 
Jul 14 22:44:10.674556 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 39542 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:10.679641 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:10.679895 systemd-logind[1314]: New session 27 of user core. Jul 14 22:44:10.741135 systemd[1]: Started session-27.scope. Jul 14 22:44:10.874682 sshd[4144]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:10.877477 systemd[1]: sshd@26-10.0.0.12:22-10.0.0.1:39542.service: Deactivated successfully. Jul 14 22:44:10.878393 systemd[1]: session-27.scope: Deactivated successfully. Jul 14 22:44:10.879143 systemd-logind[1314]: Session 27 logged out. Waiting for processes to exit. Jul 14 22:44:10.879907 systemd-logind[1314]: Removed session 27. Jul 14 22:44:15.878083 systemd[1]: Started sshd@27-10.0.0.12:22-10.0.0.1:39572.service. Jul 14 22:44:15.911718 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 39572 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:15.912827 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:15.916507 systemd-logind[1314]: New session 28 of user core. Jul 14 22:44:15.917390 systemd[1]: Started session-28.scope. Jul 14 22:44:16.163058 sshd[4163]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:16.165090 systemd[1]: sshd@27-10.0.0.12:22-10.0.0.1:39572.service: Deactivated successfully. Jul 14 22:44:16.165831 systemd[1]: session-28.scope: Deactivated successfully. Jul 14 22:44:16.166559 systemd-logind[1314]: Session 28 logged out. Waiting for processes to exit. Jul 14 22:44:16.167212 systemd-logind[1314]: Removed session 28. Jul 14 22:44:21.166418 systemd[1]: Started sshd@28-10.0.0.12:22-10.0.0.1:34038.service. Jul 14 22:44:21.202876 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 34038 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:21.204053 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:21.207287 systemd-logind[1314]: New session 29 of user core. Jul 14 22:44:21.208049 systemd[1]: Started session-29.scope. Jul 14 22:44:21.305149 sshd[4180]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:21.307154 systemd[1]: sshd@28-10.0.0.12:22-10.0.0.1:34038.service: Deactivated successfully. Jul 14 22:44:21.308197 systemd-logind[1314]: Session 29 logged out. Waiting for processes to exit. Jul 14 22:44:21.308311 systemd[1]: session-29.scope: Deactivated successfully. Jul 14 22:44:21.309029 systemd-logind[1314]: Removed session 29. Jul 14 22:44:26.308247 systemd[1]: Started sshd@29-10.0.0.12:22-10.0.0.1:34058.service. Jul 14 22:44:26.339537 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 34058 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:26.340698 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:26.344150 systemd-logind[1314]: New session 30 of user core. Jul 14 22:44:26.344852 systemd[1]: Started session-30.scope. Jul 14 22:44:26.509385 sshd[4194]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:26.511965 systemd[1]: Started sshd@30-10.0.0.12:22-10.0.0.1:34060.service. Jul 14 22:44:26.512455 systemd[1]: sshd@29-10.0.0.12:22-10.0.0.1:34058.service: Deactivated successfully. Jul 14 22:44:26.513553 systemd[1]: session-30.scope: Deactivated successfully. 
Jul 14 22:44:26.513818 systemd-logind[1314]: Session 30 logged out. Waiting for processes to exit. Jul 14 22:44:26.514505 systemd-logind[1314]: Removed session 30. Jul 14 22:44:26.543669 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 34060 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:26.544570 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:26.547453 systemd-logind[1314]: New session 31 of user core. Jul 14 22:44:26.548083 systemd[1]: Started session-31.scope. Jul 14 22:44:28.631417 env[1334]: time="2025-07-14T22:44:28.631342724Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:44:28.641619 env[1334]: time="2025-07-14T22:44:28.641568831Z" level=info msg="StopContainer for \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\" with timeout 2 (s)" Jul 14 22:44:28.641775 env[1334]: time="2025-07-14T22:44:28.641754392Z" level=info msg="Stop container \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\" with signal terminated" Jul 14 22:44:28.647162 systemd-networkd[1100]: lxc_health: Link DOWN Jul 14 22:44:28.647174 systemd-networkd[1100]: lxc_health: Lost carrier Jul 14 22:44:28.723105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a-rootfs.mount: Deactivated successfully. Jul 14 22:44:28.896204 env[1334]: time="2025-07-14T22:44:28.896047320Z" level=info msg="StopContainer for \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\" with timeout 30 (s)" Jul 14 22:44:28.896548 env[1334]: time="2025-07-14T22:44:28.896513106Z" level=info msg="Stop container \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\" with signal terminated" Jul 14 22:44:28.918100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b-rootfs.mount: Deactivated successfully. 
Jul 14 22:44:28.964700 env[1334]: time="2025-07-14T22:44:28.964657239Z" level=info msg="shim disconnected" id=099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b Jul 14 22:44:28.964700 env[1334]: time="2025-07-14T22:44:28.964698718Z" level=warning msg="cleaning up after shim disconnected" id=099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b namespace=k8s.io Jul 14 22:44:28.964979 env[1334]: time="2025-07-14T22:44:28.964710079Z" level=info msg="cleaning up dead shim" Jul 14 22:44:28.964979 env[1334]: time="2025-07-14T22:44:28.964651548Z" level=info msg="shim disconnected" id=5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a Jul 14 22:44:28.964979 env[1334]: time="2025-07-14T22:44:28.964801403Z" level=warning msg="cleaning up after shim disconnected" id=5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a namespace=k8s.io Jul 14 22:44:28.964979 env[1334]: time="2025-07-14T22:44:28.964816872Z" level=info msg="cleaning up dead shim" Jul 14 22:44:28.971713 env[1334]: time="2025-07-14T22:44:28.971662109Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4281 runtime=io.containerd.runc.v2\n" Jul 14 22:44:28.972605 env[1334]: time="2025-07-14T22:44:28.972566155Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4282 runtime=io.containerd.runc.v2\n" Jul 14 22:44:29.060034 env[1334]: time="2025-07-14T22:44:29.059958715Z" level=info msg="StopContainer for \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\" returns successfully" Jul 14 22:44:29.060506 env[1334]: time="2025-07-14T22:44:29.060483963Z" level=info msg="StopPodSandbox for \"9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226\"" Jul 14 22:44:29.060572 env[1334]: time="2025-07-14T22:44:29.060535050Z" level=info msg="Container to stop \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:29.063272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226-shm.mount: Deactivated successfully. Jul 14 22:44:29.079859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226-rootfs.mount: Deactivated successfully. 
Jul 14 22:44:29.148051 env[1334]: time="2025-07-14T22:44:29.147895034Z" level=info msg="StopContainer for \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\" returns successfully" Jul 14 22:44:29.148570 env[1334]: time="2025-07-14T22:44:29.148533776Z" level=info msg="StopPodSandbox for \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\"" Jul 14 22:44:29.148645 env[1334]: time="2025-07-14T22:44:29.148616804Z" level=info msg="Container to stop \"e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:29.148688 env[1334]: time="2025-07-14T22:44:29.148646580Z" level=info msg="Container to stop \"3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:29.148688 env[1334]: time="2025-07-14T22:44:29.148663192Z" level=info msg="Container to stop \"69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:29.148688 env[1334]: time="2025-07-14T22:44:29.148680214Z" level=info msg="Container to stop \"71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:29.148817 env[1334]: time="2025-07-14T22:44:29.148693680Z" level=info msg="Container to stop \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:44:29.223075 env[1334]: time="2025-07-14T22:44:29.222980844Z" level=info msg="shim disconnected" id=9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226 Jul 14 22:44:29.223075 env[1334]: time="2025-07-14T22:44:29.223059103Z" level=warning msg="cleaning up after shim disconnected" id=9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226 namespace=k8s.io Jul 14 22:44:29.223075 env[1334]: time="2025-07-14T22:44:29.223073831Z" level=info msg="cleaning up dead shim" Jul 14 22:44:29.230222 env[1334]: time="2025-07-14T22:44:29.230151197Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4349 runtime=io.containerd.runc.v2\n" Jul 14 22:44:29.230601 env[1334]: time="2025-07-14T22:44:29.230555595Z" level=info msg="TearDown network for sandbox \"9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226\" successfully" Jul 14 22:44:29.230601 env[1334]: time="2025-07-14T22:44:29.230586033Z" level=info msg="StopPodSandbox for \"9a238712683410753122aa96714c59362aa58823c4eac66655fed8f4ae856226\" returns successfully" Jul 14 22:44:29.303010 env[1334]: time="2025-07-14T22:44:29.302947792Z" level=info msg="shim disconnected" id=773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58 Jul 14 22:44:29.303010 env[1334]: time="2025-07-14T22:44:29.302994731Z" level=warning msg="cleaning up after shim disconnected" id=773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58 namespace=k8s.io Jul 14 22:44:29.303010 env[1334]: time="2025-07-14T22:44:29.303003898Z" level=info msg="cleaning up dead shim" Jul 14 22:44:29.309101 env[1334]: time="2025-07-14T22:44:29.309063092Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4361 runtime=io.containerd.runc.v2\n" Jul 14 22:44:29.309453 env[1334]: 
time="2025-07-14T22:44:29.309424698Z" level=info msg="TearDown network for sandbox \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" successfully" Jul 14 22:44:29.309514 env[1334]: time="2025-07-14T22:44:29.309453623Z" level=info msg="StopPodSandbox for \"773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58\" returns successfully" Jul 14 22:44:29.329108 kubelet[2204]: I0714 22:44:29.329068 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae24e431-76f5-47d8-a8b3-5db74da44c76-cilium-config-path\") pod \"ae24e431-76f5-47d8-a8b3-5db74da44c76\" (UID: \"ae24e431-76f5-47d8-a8b3-5db74da44c76\") " Jul 14 22:44:29.329448 kubelet[2204]: I0714 22:44:29.329132 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pxvl\" (UniqueName: \"kubernetes.io/projected/ae24e431-76f5-47d8-a8b3-5db74da44c76-kube-api-access-9pxvl\") pod \"ae24e431-76f5-47d8-a8b3-5db74da44c76\" (UID: \"ae24e431-76f5-47d8-a8b3-5db74da44c76\") " Jul 14 22:44:29.330901 kubelet[2204]: I0714 22:44:29.330863 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae24e431-76f5-47d8-a8b3-5db74da44c76-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae24e431-76f5-47d8-a8b3-5db74da44c76" (UID: "ae24e431-76f5-47d8-a8b3-5db74da44c76"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:44:29.331732 kubelet[2204]: I0714 22:44:29.331708 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae24e431-76f5-47d8-a8b3-5db74da44c76-kube-api-access-9pxvl" (OuterVolumeSpecName: "kube-api-access-9pxvl") pod "ae24e431-76f5-47d8-a8b3-5db74da44c76" (UID: "ae24e431-76f5-47d8-a8b3-5db74da44c76"). InnerVolumeSpecName "kube-api-access-9pxvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:29.429978 kubelet[2204]: I0714 22:44:29.429816 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-kernel\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.429978 kubelet[2204]: I0714 22:44:29.429893 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwd9h\" (UniqueName: \"kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-kube-api-access-rwd9h\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.429978 kubelet[2204]: I0714 22:44:29.429915 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-bpf-maps\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.429978 kubelet[2204]: I0714 22:44:29.429935 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cni-path\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.429978 kubelet[2204]: I0714 22:44:29.429971 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-clustermesh-secrets\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430431 kubelet[2204]: I0714 22:44:29.430000 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-config-path\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430431 kubelet[2204]: I0714 22:44:29.430019 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-xtables-lock\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430431 kubelet[2204]: I0714 22:44:29.430036 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-cgroup\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430431 kubelet[2204]: I0714 22:44:29.430057 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-lib-modules\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430431 kubelet[2204]: I0714 22:44:29.430078 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hubble-tls\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: 
\"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430431 kubelet[2204]: I0714 22:44:29.430096 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hostproc\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430714 kubelet[2204]: I0714 22:44:29.430113 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-etc-cni-netd\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430714 kubelet[2204]: I0714 22:44:29.430133 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-run\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430714 kubelet[2204]: I0714 22:44:29.430151 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-net\") pod \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\" (UID: \"ce4ac92b-addb-4d36-ace7-9a52e3bf725e\") " Jul 14 22:44:29.430714 kubelet[2204]: I0714 22:44:29.429939 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.430714 kubelet[2204]: I0714 22:44:29.430191 2204 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pxvl\" (UniqueName: \"kubernetes.io/projected/ae24e431-76f5-47d8-a8b3-5db74da44c76-kube-api-access-9pxvl\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.430714 kubelet[2204]: I0714 22:44:29.430077 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cni-path" (OuterVolumeSpecName: "cni-path") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.430998 kubelet[2204]: I0714 22:44:29.430209 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae24e431-76f5-47d8-a8b3-5db74da44c76-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.430998 kubelet[2204]: I0714 22:44:29.430111 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.430998 kubelet[2204]: I0714 22:44:29.430246 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.430998 kubelet[2204]: I0714 22:44:29.430165 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.430998 kubelet[2204]: I0714 22:44:29.430318 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.431220 kubelet[2204]: I0714 22:44:29.430339 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.431220 kubelet[2204]: I0714 22:44:29.431012 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hostproc" (OuterVolumeSpecName: "hostproc") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.431220 kubelet[2204]: I0714 22:44:29.431043 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.431220 kubelet[2204]: I0714 22:44:29.431062 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:29.433669 kubelet[2204]: I0714 22:44:29.433631 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:44:29.433903 kubelet[2204]: I0714 22:44:29.433873 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:44:29.434704 kubelet[2204]: I0714 22:44:29.434673 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:29.435157 kubelet[2204]: I0714 22:44:29.435125 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-kube-api-access-rwd9h" (OuterVolumeSpecName: "kube-api-access-rwd9h") pod "ce4ac92b-addb-4d36-ace7-9a52e3bf725e" (UID: "ce4ac92b-addb-4d36-ace7-9a52e3bf725e"). InnerVolumeSpecName "kube-api-access-rwd9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:29.530621 kubelet[2204]: I0714 22:44:29.530595 2204 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwd9h\" (UniqueName: \"kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-kube-api-access-rwd9h\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.530799 kubelet[2204]: I0714 22:44:29.530783 2204 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.530886 kubelet[2204]: I0714 22:44:29.530872 2204 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.530986 kubelet[2204]: I0714 22:44:29.530972 2204 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531068 kubelet[2204]: I0714 22:44:29.531053 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531144 kubelet[2204]: I0714 22:44:29.531130 2204 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531226 kubelet[2204]: I0714 22:44:29.531212 2204 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531349 kubelet[2204]: I0714 22:44:29.531334 2204 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 
22:44:29.531425 kubelet[2204]: I0714 22:44:29.531411 2204 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531508 kubelet[2204]: I0714 22:44:29.531494 2204 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531589 kubelet[2204]: I0714 22:44:29.531574 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531678 kubelet[2204]: I0714 22:44:29.531664 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531761 kubelet[2204]: I0714 22:44:29.531746 2204 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.531857 kubelet[2204]: I0714 22:44:29.531842 2204 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce4ac92b-addb-4d36-ace7-9a52e3bf725e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:29.534631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58-rootfs.mount: Deactivated successfully. Jul 14 22:44:29.534774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-773024c98bd76f303ca0e27e477caccd37da72076ee387d6fb3ca52a55489c58-shm.mount: Deactivated successfully. Jul 14 22:44:29.534876 systemd[1]: var-lib-kubelet-pods-ce4ac92b\x2daddb\x2d4d36\x2dace7\x2d9a52e3bf725e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drwd9h.mount: Deactivated successfully. Jul 14 22:44:29.534991 systemd[1]: var-lib-kubelet-pods-ce4ac92b\x2daddb\x2d4d36\x2dace7\x2d9a52e3bf725e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 22:44:29.535081 systemd[1]: var-lib-kubelet-pods-ce4ac92b\x2daddb\x2d4d36\x2dace7\x2d9a52e3bf725e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 22:44:29.535171 systemd[1]: var-lib-kubelet-pods-ae24e431\x2d76f5\x2d47d8\x2da8b3\x2d5db74da44c76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9pxvl.mount: Deactivated successfully. Jul 14 22:44:30.201633 kubelet[2204]: I0714 22:44:30.201605 2204 scope.go:117] "RemoveContainer" containerID="099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b" Jul 14 22:44:30.203527 env[1334]: time="2025-07-14T22:44:30.203478762Z" level=info msg="RemoveContainer for \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\"" Jul 14 22:44:30.203980 sshd[4206]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:30.207200 systemd[1]: Started sshd@31-10.0.0.12:22-10.0.0.1:49664.service. Jul 14 22:44:30.207759 systemd[1]: sshd@30-10.0.0.12:22-10.0.0.1:34060.service: Deactivated successfully. Jul 14 22:44:30.210250 systemd[1]: session-31.scope: Deactivated successfully. 
Jul 14 22:44:30.211703 systemd-logind[1314]: Session 31 logged out. Waiting for processes to exit. Jul 14 22:44:30.213093 systemd-logind[1314]: Removed session 31. Jul 14 22:44:30.239358 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 49664 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:30.240376 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:30.244071 systemd-logind[1314]: New session 32 of user core. Jul 14 22:44:30.244959 systemd[1]: Started session-32.scope. Jul 14 22:44:30.270515 env[1334]: time="2025-07-14T22:44:30.270463162Z" level=info msg="RemoveContainer for \"099ddb3cf0d30da2b5742c80099f9a5ca3e8a09607165a663d58e99b826c7f3b\" returns successfully" Jul 14 22:44:30.270848 kubelet[2204]: I0714 22:44:30.270785 2204 scope.go:117] "RemoveContainer" containerID="5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a" Jul 14 22:44:30.272015 env[1334]: time="2025-07-14T22:44:30.271984960Z" level=info msg="RemoveContainer for \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\"" Jul 14 22:44:30.424638 env[1334]: time="2025-07-14T22:44:30.424575368Z" level=info msg="RemoveContainer for \"5b28349ad3c39273848a5e45682c43fe47191594fea3dd97f8842174f15e8d0a\" returns successfully" Jul 14 22:44:30.424942 kubelet[2204]: I0714 22:44:30.424915 2204 scope.go:117] "RemoveContainer" containerID="71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120" Jul 14 22:44:30.426450 env[1334]: time="2025-07-14T22:44:30.426377559Z" level=info msg="RemoveContainer for \"71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120\"" Jul 14 22:44:30.491008 env[1334]: time="2025-07-14T22:44:30.490884476Z" level=info msg="RemoveContainer for \"71dc22aab594a6691e104a0de964e58c80e7f8a15a012fb68737416940bd2120\" returns successfully" Jul 14 22:44:30.491197 kubelet[2204]: I0714 22:44:30.491171 2204 scope.go:117] "RemoveContainer" containerID="69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771" Jul 14 22:44:30.492383 env[1334]: time="2025-07-14T22:44:30.492313409Z" level=info msg="RemoveContainer for \"69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771\"" Jul 14 22:44:30.516479 env[1334]: time="2025-07-14T22:44:30.516415533Z" level=info msg="RemoveContainer for \"69c722608797d9d6122f410883b2f39840826908fa52bfd4810c766a00149771\" returns successfully" Jul 14 22:44:30.516729 kubelet[2204]: I0714 22:44:30.516700 2204 scope.go:117] "RemoveContainer" containerID="3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d" Jul 14 22:44:30.517686 env[1334]: time="2025-07-14T22:44:30.517662098Z" level=info msg="RemoveContainer for \"3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d\"" Jul 14 22:44:30.549953 env[1334]: time="2025-07-14T22:44:30.549900508Z" level=info msg="RemoveContainer for \"3dc1b8fe1051ca2c724be2cc7edba074d8c8e6434e6decc7904c774c7829694d\" returns successfully" Jul 14 22:44:30.550198 kubelet[2204]: I0714 22:44:30.550156 2204 scope.go:117] "RemoveContainer" containerID="e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e" Jul 14 22:44:30.551006 env[1334]: time="2025-07-14T22:44:30.550987021Z" level=info msg="RemoveContainer for \"e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e\"" Jul 14 22:44:30.565183 env[1334]: time="2025-07-14T22:44:30.565157732Z" level=info msg="RemoveContainer for \"e086f4df9b09ad36f3d4fed3b271368f794af4e23e19d0c1997fe46d9644957e\" returns successfully" Jul 14 22:44:31.426136 systemd[1]: 
Started sshd@32-10.0.0.12:22-10.0.0.1:49680.service. Jul 14 22:44:31.426650 sshd[4379]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:31.429451 systemd-logind[1314]: Session 32 logged out. Waiting for processes to exit. Jul 14 22:44:31.430030 systemd[1]: sshd@31-10.0.0.12:22-10.0.0.1:49664.service: Deactivated successfully. Jul 14 22:44:31.435915 systemd[1]: session-32.scope: Deactivated successfully. Jul 14 22:44:31.438628 kubelet[2204]: E0714 22:44:31.438593 2204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce4ac92b-addb-4d36-ace7-9a52e3bf725e" containerName="mount-cgroup" Jul 14 22:44:31.438628 kubelet[2204]: E0714 22:44:31.438618 2204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce4ac92b-addb-4d36-ace7-9a52e3bf725e" containerName="cilium-agent" Jul 14 22:44:31.438628 kubelet[2204]: E0714 22:44:31.438625 2204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce4ac92b-addb-4d36-ace7-9a52e3bf725e" containerName="apply-sysctl-overwrites" Jul 14 22:44:31.438628 kubelet[2204]: E0714 22:44:31.438632 2204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae24e431-76f5-47d8-a8b3-5db74da44c76" containerName="cilium-operator" Jul 14 22:44:31.438628 kubelet[2204]: E0714 22:44:31.438637 2204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce4ac92b-addb-4d36-ace7-9a52e3bf725e" containerName="mount-bpf-fs" Jul 14 22:44:31.438628 kubelet[2204]: E0714 22:44:31.438641 2204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce4ac92b-addb-4d36-ace7-9a52e3bf725e" containerName="clean-cilium-state" Jul 14 22:44:31.438628 kubelet[2204]: I0714 22:44:31.438663 2204 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae24e431-76f5-47d8-a8b3-5db74da44c76" containerName="cilium-operator" Jul 14 22:44:31.438628 kubelet[2204]: I0714 22:44:31.438668 2204 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4ac92b-addb-4d36-ace7-9a52e3bf725e" containerName="cilium-agent" Jul 14 22:44:31.442321 systemd-logind[1314]: Removed session 32. Jul 14 22:44:31.469643 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:31.471390 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:31.476235 systemd-logind[1314]: New session 33 of user core. Jul 14 22:44:31.476653 systemd[1]: Started session-33.scope. 
Jul 14 22:44:31.531689 kubelet[2204]: I0714 22:44:31.531640 2204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae24e431-76f5-47d8-a8b3-5db74da44c76" path="/var/lib/kubelet/pods/ae24e431-76f5-47d8-a8b3-5db74da44c76/volumes" Jul 14 22:44:31.532038 kubelet[2204]: I0714 22:44:31.532008 2204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4ac92b-addb-4d36-ace7-9a52e3bf725e" path="/var/lib/kubelet/pods/ce4ac92b-addb-4d36-ace7-9a52e3bf725e/volumes" Jul 14 22:44:31.543724 kubelet[2204]: I0714 22:44:31.543661 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hostproc\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.543724 kubelet[2204]: I0714 22:44:31.543717 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-config-path\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.543931 kubelet[2204]: I0714 22:44:31.543741 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-kernel\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.543931 kubelet[2204]: I0714 22:44:31.543757 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-run\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.543931 kubelet[2204]: I0714 22:44:31.543771 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-lib-modules\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.543931 kubelet[2204]: I0714 22:44:31.543783 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-net\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.543931 kubelet[2204]: I0714 22:44:31.543795 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cni-path\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.543931 kubelet[2204]: I0714 22:44:31.543807 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-etc-cni-netd\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.544071 kubelet[2204]: I0714 22:44:31.543834 2204 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-bpf-maps\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.544071 kubelet[2204]: I0714 22:44:31.543851 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-cgroup\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.544071 kubelet[2204]: I0714 22:44:31.543870 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-xtables-lock\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.544071 kubelet[2204]: I0714 22:44:31.543883 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-clustermesh-secrets\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.544071 kubelet[2204]: I0714 22:44:31.543896 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-ipsec-secrets\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.544071 kubelet[2204]: I0714 22:44:31.543909 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hubble-tls\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.544205 kubelet[2204]: I0714 22:44:31.543921 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9d9\" (UniqueName: \"kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-kube-api-access-4f9d9\") pod \"cilium-xvw7p\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " pod="kube-system/cilium-xvw7p" Jul 14 22:44:31.597198 sshd[4391]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:31.602841 systemd[1]: Started sshd@33-10.0.0.12:22-10.0.0.1:49696.service. Jul 14 22:44:31.607741 systemd[1]: sshd@32-10.0.0.12:22-10.0.0.1:49680.service: Deactivated successfully. Jul 14 22:44:31.608586 systemd[1]: session-33.scope: Deactivated successfully. Jul 14 22:44:31.609731 kubelet[2204]: E0714 22:44:31.609595 2204 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-4f9d9 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-xvw7p" podUID="1219ba0c-e051-4a12-a572-a6fa9e5dac02" Jul 14 22:44:31.611414 systemd-logind[1314]: Session 33 logged out. Waiting for processes to exit. 
Jul 14 22:44:31.612669 systemd-logind[1314]: Removed session 33. Jul 14 22:44:31.614464 kubelet[2204]: E0714 22:44:31.614433 2204 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 22:44:31.637393 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 49696 ssh2: RSA SHA256:9J5UK/+PqU7n1wZmSgzLbm/e/olRUtYYL5T3eqkzK4I Jul 14 22:44:31.638973 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:44:31.643359 systemd-logind[1314]: New session 34 of user core. Jul 14 22:44:31.643803 systemd[1]: Started session-34.scope. Jul 14 22:44:32.247205 kubelet[2204]: I0714 22:44:32.247152 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f9d9\" (UniqueName: \"kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-kube-api-access-4f9d9\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247205 kubelet[2204]: I0714 22:44:32.247189 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-config-path\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247205 kubelet[2204]: I0714 22:44:32.247203 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-bpf-maps\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247205 kubelet[2204]: I0714 22:44:32.247215 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-lib-modules\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247501 kubelet[2204]: I0714 22:44:32.247229 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cni-path\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247501 kubelet[2204]: I0714 22:44:32.247243 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hubble-tls\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247501 kubelet[2204]: I0714 22:44:32.247255 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-net\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247501 kubelet[2204]: I0714 22:44:32.247306 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-clustermesh-secrets\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247501 kubelet[2204]: I0714 
22:44:32.247307 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.247501 kubelet[2204]: I0714 22:44:32.247319 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hostproc\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247700 kubelet[2204]: I0714 22:44:32.247335 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-cgroup\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247700 kubelet[2204]: I0714 22:44:32.247348 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-kernel\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247700 kubelet[2204]: I0714 22:44:32.247360 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-xtables-lock\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247700 kubelet[2204]: I0714 22:44:32.247372 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-ipsec-secrets\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247700 kubelet[2204]: I0714 22:44:32.247383 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-run\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247700 kubelet[2204]: I0714 22:44:32.247397 2204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-etc-cni-netd\") pod \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\" (UID: \"1219ba0c-e051-4a12-a572-a6fa9e5dac02\") " Jul 14 22:44:32.247904 kubelet[2204]: I0714 22:44:32.247423 2204 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.247904 kubelet[2204]: I0714 22:44:32.247313 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.247904 kubelet[2204]: I0714 22:44:32.247337 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cni-path" (OuterVolumeSpecName: "cni-path") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.247904 kubelet[2204]: I0714 22:44:32.247352 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hostproc" (OuterVolumeSpecName: "hostproc") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.247904 kubelet[2204]: I0714 22:44:32.247360 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.248114 kubelet[2204]: I0714 22:44:32.247443 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.248114 kubelet[2204]: I0714 22:44:32.247472 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.248114 kubelet[2204]: I0714 22:44:32.247484 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.248114 kubelet[2204]: I0714 22:44:32.247495 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.249064 kubelet[2204]: I0714 22:44:32.249037 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:44:32.249982 kubelet[2204]: I0714 22:44:32.249955 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:32.250048 kubelet[2204]: I0714 22:44:32.250002 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 14 22:44:32.250289 kubelet[2204]: I0714 22:44:32.250187 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:44:32.250342 kubelet[2204]: I0714 22:44:32.250327 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:44:32.251621 systemd[1]: var-lib-kubelet-pods-1219ba0c\x2de051\x2d4a12\x2da572\x2da6fa9e5dac02-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 22:44:32.251756 systemd[1]: var-lib-kubelet-pods-1219ba0c\x2de051\x2d4a12\x2da572\x2da6fa9e5dac02-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 22:44:32.251861 systemd[1]: var-lib-kubelet-pods-1219ba0c\x2de051\x2d4a12\x2da572\x2da6fa9e5dac02-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 14 22:44:32.253554 kubelet[2204]: I0714 22:44:32.253525 2204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-kube-api-access-4f9d9" (OuterVolumeSpecName: "kube-api-access-4f9d9") pod "1219ba0c-e051-4a12-a572-a6fa9e5dac02" (UID: "1219ba0c-e051-4a12-a572-a6fa9e5dac02"). InnerVolumeSpecName "kube-api-access-4f9d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:44:32.254665 systemd[1]: var-lib-kubelet-pods-1219ba0c\x2de051\x2d4a12\x2da572\x2da6fa9e5dac02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4f9d9.mount: Deactivated successfully. 
Jul 14 22:44:32.347943 kubelet[2204]: I0714 22:44:32.347888 2204 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.347943 kubelet[2204]: I0714 22:44:32.347920 2204 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.347943 kubelet[2204]: I0714 22:44:32.347933 2204 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.347943 kubelet[2204]: I0714 22:44:32.347950 2204 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.347964 2204 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.347973 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.347984 2204 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.347993 2204 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.348002 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.348011 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.348019 2204 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348177 kubelet[2204]: I0714 22:44:32.348027 2204 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1219ba0c-e051-4a12-a572-a6fa9e5dac02-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348408 kubelet[2204]: I0714 22:44:32.348036 2204 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f9d9\" (UniqueName: \"kubernetes.io/projected/1219ba0c-e051-4a12-a572-a6fa9e5dac02-kube-api-access-4f9d9\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.348408 kubelet[2204]: I0714 
22:44:32.348045 2204 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1219ba0c-e051-4a12-a572-a6fa9e5dac02-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 22:44:32.529531 kubelet[2204]: E0714 22:44:32.529373 2204 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-hjwh2" podUID="55cb3708-b450-4674-a8b0-9b632af58c9f" Jul 14 22:44:33.353921 kubelet[2204]: I0714 22:44:33.353869 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-host-proc-sys-net\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.353921 kubelet[2204]: I0714 22:44:33.353905 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-etc-cni-netd\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.353921 kubelet[2204]: I0714 22:44:33.353925 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d00b671-76af-4af2-8a3a-306447b9c5a0-cilium-config-path\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.353921 kubelet[2204]: I0714 22:44:33.353938 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d00b671-76af-4af2-8a3a-306447b9c5a0-cilium-ipsec-secrets\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354157 kubelet[2204]: I0714 22:44:33.353984 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d00b671-76af-4af2-8a3a-306447b9c5a0-clustermesh-secrets\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354157 kubelet[2204]: I0714 22:44:33.353999 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-xtables-lock\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354157 kubelet[2204]: I0714 22:44:33.354012 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgfv2\" (UniqueName: \"kubernetes.io/projected/3d00b671-76af-4af2-8a3a-306447b9c5a0-kube-api-access-sgfv2\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354157 kubelet[2204]: I0714 22:44:33.354027 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-cilium-run\") pod 
\"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354157 kubelet[2204]: I0714 22:44:33.354039 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-lib-modules\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354157 kubelet[2204]: I0714 22:44:33.354074 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-hostproc\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354366 kubelet[2204]: I0714 22:44:33.354087 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-cilium-cgroup\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354366 kubelet[2204]: I0714 22:44:33.354099 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-cni-path\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354366 kubelet[2204]: I0714 22:44:33.354112 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-host-proc-sys-kernel\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354366 kubelet[2204]: I0714 22:44:33.354125 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d00b671-76af-4af2-8a3a-306447b9c5a0-bpf-maps\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.354366 kubelet[2204]: I0714 22:44:33.354158 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d00b671-76af-4af2-8a3a-306447b9c5a0-hubble-tls\") pod \"cilium-22gvb\" (UID: \"3d00b671-76af-4af2-8a3a-306447b9c5a0\") " pod="kube-system/cilium-22gvb" Jul 14 22:44:33.531692 kubelet[2204]: I0714 22:44:33.531645 2204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1219ba0c-e051-4a12-a572-a6fa9e5dac02" path="/var/lib/kubelet/pods/1219ba0c-e051-4a12-a572-a6fa9e5dac02/volumes" Jul 14 22:44:33.563358 kubelet[2204]: E0714 22:44:33.563250 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:33.563965 env[1334]: time="2025-07-14T22:44:33.563903564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22gvb,Uid:3d00b671-76af-4af2-8a3a-306447b9c5a0,Namespace:kube-system,Attempt:0,}" Jul 14 22:44:33.578477 env[1334]: time="2025-07-14T22:44:33.578396801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:44:33.578477 env[1334]: time="2025-07-14T22:44:33.578443439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:44:33.578477 env[1334]: time="2025-07-14T22:44:33.578454340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:44:33.578690 env[1334]: time="2025-07-14T22:44:33.578616498Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8 pid=4436 runtime=io.containerd.runc.v2 Jul 14 22:44:33.613880 env[1334]: time="2025-07-14T22:44:33.613026733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22gvb,Uid:3d00b671-76af-4af2-8a3a-306447b9c5a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\"" Jul 14 22:44:33.614084 kubelet[2204]: E0714 22:44:33.613686 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:33.615680 env[1334]: time="2025-07-14T22:44:33.615614364Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:44:33.885911 env[1334]: time="2025-07-14T22:44:33.885458551Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7227e43a1622d1cc71fb00c20367565f1ed641cf5d4b32250aab02c046e3579b\"" Jul 14 22:44:33.886840 env[1334]: time="2025-07-14T22:44:33.886778426Z" level=info msg="StartContainer for \"7227e43a1622d1cc71fb00c20367565f1ed641cf5d4b32250aab02c046e3579b\"" Jul 14 22:44:33.935054 env[1334]: time="2025-07-14T22:44:33.935008008Z" level=info msg="StartContainer for \"7227e43a1622d1cc71fb00c20367565f1ed641cf5d4b32250aab02c046e3579b\" returns successfully" Jul 14 22:44:33.967662 env[1334]: time="2025-07-14T22:44:33.967603721Z" level=info msg="shim disconnected" id=7227e43a1622d1cc71fb00c20367565f1ed641cf5d4b32250aab02c046e3579b Jul 14 22:44:33.967662 env[1334]: time="2025-07-14T22:44:33.967658304Z" level=warning msg="cleaning up after shim disconnected" id=7227e43a1622d1cc71fb00c20367565f1ed641cf5d4b32250aab02c046e3579b namespace=k8s.io Jul 14 22:44:33.967662 env[1334]: time="2025-07-14T22:44:33.967672441Z" level=info msg="cleaning up dead shim" Jul 14 22:44:33.974162 env[1334]: time="2025-07-14T22:44:33.974109067Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4521 runtime=io.containerd.runc.v2\n" Jul 14 22:44:34.218422 kubelet[2204]: E0714 22:44:34.218374 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:34.220913 env[1334]: time="2025-07-14T22:44:34.220826259Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:44:34.235646 env[1334]: 
time="2025-07-14T22:44:34.235555991Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"341a0c73a5c8298fe7acd60fa56e05033e9d330f17092084ff24d97c277372a8\"" Jul 14 22:44:34.236802 env[1334]: time="2025-07-14T22:44:34.236744417Z" level=info msg="StartContainer for \"341a0c73a5c8298fe7acd60fa56e05033e9d330f17092084ff24d97c277372a8\"" Jul 14 22:44:34.278920 env[1334]: time="2025-07-14T22:44:34.278861112Z" level=info msg="StartContainer for \"341a0c73a5c8298fe7acd60fa56e05033e9d330f17092084ff24d97c277372a8\" returns successfully" Jul 14 22:44:34.303849 env[1334]: time="2025-07-14T22:44:34.303782135Z" level=info msg="shim disconnected" id=341a0c73a5c8298fe7acd60fa56e05033e9d330f17092084ff24d97c277372a8 Jul 14 22:44:34.303849 env[1334]: time="2025-07-14T22:44:34.303846016Z" level=warning msg="cleaning up after shim disconnected" id=341a0c73a5c8298fe7acd60fa56e05033e9d330f17092084ff24d97c277372a8 namespace=k8s.io Jul 14 22:44:34.303849 env[1334]: time="2025-07-14T22:44:34.303856606Z" level=info msg="cleaning up dead shim" Jul 14 22:44:34.310379 env[1334]: time="2025-07-14T22:44:34.310331022Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4580 runtime=io.containerd.runc.v2\n" Jul 14 22:44:34.530128 kubelet[2204]: E0714 22:44:34.529943 2204 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-hjwh2" podUID="55cb3708-b450-4674-a8b0-9b632af58c9f" Jul 14 22:44:35.221742 kubelet[2204]: E0714 22:44:35.221703 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:35.223323 env[1334]: time="2025-07-14T22:44:35.223281729Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:44:35.358866 kubelet[2204]: I0714 22:44:35.358819 2204 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T22:44:35Z","lastTransitionTime":"2025-07-14T22:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 14 22:44:35.547443 env[1334]: time="2025-07-14T22:44:35.547296805Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94075eba91f5e4827e9aefb348a7709fde570977b2332d3360675e8ce319339a\"" Jul 14 22:44:35.547993 env[1334]: time="2025-07-14T22:44:35.547851147Z" level=info msg="StartContainer for \"94075eba91f5e4827e9aefb348a7709fde570977b2332d3360675e8ce319339a\"" Jul 14 22:44:35.594069 env[1334]: time="2025-07-14T22:44:35.594005838Z" level=info msg="StartContainer for \"94075eba91f5e4827e9aefb348a7709fde570977b2332d3360675e8ce319339a\" returns successfully" Jul 14 22:44:35.613215 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-94075eba91f5e4827e9aefb348a7709fde570977b2332d3360675e8ce319339a-rootfs.mount: Deactivated successfully. Jul 14 22:44:35.619741 env[1334]: time="2025-07-14T22:44:35.619690997Z" level=info msg="shim disconnected" id=94075eba91f5e4827e9aefb348a7709fde570977b2332d3360675e8ce319339a Jul 14 22:44:35.619893 env[1334]: time="2025-07-14T22:44:35.619741553Z" level=warning msg="cleaning up after shim disconnected" id=94075eba91f5e4827e9aefb348a7709fde570977b2332d3360675e8ce319339a namespace=k8s.io Jul 14 22:44:35.619893 env[1334]: time="2025-07-14T22:44:35.619757222Z" level=info msg="cleaning up dead shim" Jul 14 22:44:35.627073 env[1334]: time="2025-07-14T22:44:35.627017339Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4637 runtime=io.containerd.runc.v2\n" Jul 14 22:44:36.225542 kubelet[2204]: E0714 22:44:36.225509 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:36.227495 env[1334]: time="2025-07-14T22:44:36.227452422Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:44:36.247686 env[1334]: time="2025-07-14T22:44:36.247591984Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3a73d23895f8b30579d97e60df437b2d2804000bc66642b753c2aa52270f619b\"" Jul 14 22:44:36.248411 env[1334]: time="2025-07-14T22:44:36.248375130Z" level=info msg="StartContainer for \"3a73d23895f8b30579d97e60df437b2d2804000bc66642b753c2aa52270f619b\"" Jul 14 22:44:36.293560 env[1334]: time="2025-07-14T22:44:36.293503888Z" level=info msg="StartContainer for \"3a73d23895f8b30579d97e60df437b2d2804000bc66642b753c2aa52270f619b\" returns successfully" Jul 14 22:44:36.310540 env[1334]: time="2025-07-14T22:44:36.310479337Z" level=info msg="shim disconnected" id=3a73d23895f8b30579d97e60df437b2d2804000bc66642b753c2aa52270f619b Jul 14 22:44:36.310540 env[1334]: time="2025-07-14T22:44:36.310528320Z" level=warning msg="cleaning up after shim disconnected" id=3a73d23895f8b30579d97e60df437b2d2804000bc66642b753c2aa52270f619b namespace=k8s.io Jul 14 22:44:36.310540 env[1334]: time="2025-07-14T22:44:36.310537838Z" level=info msg="cleaning up dead shim" Jul 14 22:44:36.317291 env[1334]: time="2025-07-14T22:44:36.317205799Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:44:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4693 runtime=io.containerd.runc.v2\n" Jul 14 22:44:36.529707 kubelet[2204]: E0714 22:44:36.529470 2204 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-hjwh2" podUID="55cb3708-b450-4674-a8b0-9b632af58c9f" Jul 14 22:44:36.615843 kubelet[2204]: E0714 22:44:36.615795 2204 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 22:44:37.230728 kubelet[2204]: E0714 22:44:37.230666 2204 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:37.233122 env[1334]: time="2025-07-14T22:44:37.233058367Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:44:37.249607 env[1334]: time="2025-07-14T22:44:37.249538213Z" level=info msg="CreateContainer within sandbox \"5abc934cf3bcc4bbff32bcfcf4e76cdaea08330492a7a0c2f92fa207408f49b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"165ab9fba40f50a871d968357502502bbcbf5a988c43e2d2f4f7589b583c6e66\"" Jul 14 22:44:37.250325 env[1334]: time="2025-07-14T22:44:37.250230416Z" level=info msg="StartContainer for \"165ab9fba40f50a871d968357502502bbcbf5a988c43e2d2f4f7589b583c6e66\"" Jul 14 22:44:37.297579 env[1334]: time="2025-07-14T22:44:37.297515208Z" level=info msg="StartContainer for \"165ab9fba40f50a871d968357502502bbcbf5a988c43e2d2f4f7589b583c6e66\" returns successfully" Jul 14 22:44:37.625300 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 14 22:44:38.235179 kubelet[2204]: E0714 22:44:38.235142 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:38.320374 kubelet[2204]: I0714 22:44:38.320312 2204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-22gvb" podStartSLOduration=5.32029239 podStartE2EDuration="5.32029239s" podCreationTimestamp="2025-07-14 22:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:44:38.319859008 +0000 UTC m=+216.869380806" watchObservedRunningTime="2025-07-14 22:44:38.32029239 +0000 UTC m=+216.869814178" Jul 14 22:44:38.529939 kubelet[2204]: E0714 22:44:38.529781 2204 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-hjwh2" podUID="55cb3708-b450-4674-a8b0-9b632af58c9f" Jul 14 22:44:39.564178 kubelet[2204]: E0714 22:44:39.564132 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:40.484360 systemd-networkd[1100]: lxc_health: Link UP Jul 14 22:44:40.491455 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 14 22:44:40.490923 systemd-networkd[1100]: lxc_health: Gained carrier Jul 14 22:44:40.531895 kubelet[2204]: E0714 22:44:40.531834 2204 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-hjwh2" podUID="55cb3708-b450-4674-a8b0-9b632af58c9f" Jul 14 22:44:41.565620 kubelet[2204]: E0714 22:44:41.565575 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:41.658441 systemd-networkd[1100]: lxc_health: Gained IPv6LL Jul 
14 22:44:42.242643 kubelet[2204]: E0714 22:44:42.242598 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:42.443423 systemd[1]: run-containerd-runc-k8s.io-165ab9fba40f50a871d968357502502bbcbf5a988c43e2d2f4f7589b583c6e66-runc.9JVjMy.mount: Deactivated successfully. Jul 14 22:44:42.530205 kubelet[2204]: E0714 22:44:42.530080 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:43.244451 kubelet[2204]: E0714 22:44:43.244413 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:45.529565 kubelet[2204]: E0714 22:44:45.529524 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:44:46.733907 sshd[4406]: pam_unix(sshd:session): session closed for user core Jul 14 22:44:46.736053 systemd[1]: sshd@33-10.0.0.12:22-10.0.0.1:49696.service: Deactivated successfully. Jul 14 22:44:46.737101 systemd-logind[1314]: Session 34 logged out. Waiting for processes to exit. Jul 14 22:44:46.737166 systemd[1]: session-34.scope: Deactivated successfully. Jul 14 22:44:46.737840 systemd-logind[1314]: Removed session 34.